CS4FN Advent – Day 18: cracker or hacker? Cyber security

It’s Day 18 of the CS4FN Christmas Computing Advent Calendar and also the last day for 2nd class Christmas post to reach people in the UK, but you’ve got until Tuesday the 21st for first class post.

We’ve been posting a computing-themed article linked to the picture on the ‘front’ of the advent calendar for the last 17 days and today is no exception. The picture is of a Christmas cracker so today’s theme is going to be computer hacking and cracking – all about Cyber Security.

If you’ve missed any of our previous posts, please scroll to the end of this one where we have a full list.

A cracker, ready to pop

 

The terms ‘cracker’ and ‘hacker’ are often used interchangeably to refer to people who break into computers, though the word hacker generally also has a friendlier meaning – someone who uses their skills to find a workaround or a solution (e.g. ‘a clever hack’) – whereas a cracker is probably someone who shouldn’t be in your system and is up to no good. Both can use very similar skills though: one uses them to benefit others, the other to benefit themselves.

We have an entire issue of the CS4FN magazine all about Cyber Security – it’s issue 24 and is called ‘Keep Out’ but we’ll let you in to read it. All you have to do is click on this very secret link, then click on the magazine’s front cover to download the PDF. But don’t tell anyone else…

Both the articles below were originally published in the magazine as well as on the CS4FN website.

 

Piracy on the open Wi-fi

by Jane Waite, Queen Mary University of London. This article was originally published on the CS4FN website.

You arrive in your holiday hotel and ask about Wi-Fi. Time to finish off your online game, connect with friends, listen to music, kick back and do whatever is your online thing. Excellent! The hotel Wi-Fi is free and better still you don’t even need one of those huge long codes to access it. Great news, or is it?

Pirate flag and wifi picture adapted from an image by OpenClipart-Vectors from Pixabay

You always have to be very cautious around public Wi-Fi whether in hotels or cafes. One common attack is for the bad guys to set up a fake Wi-Fi with a name very similar to the real one. If you connect to it without realising, then everything you do online passes through their computer, including all those user IDs and passwords you send out to services you connect to. Even if the passwords they see are encrypted, they can crack them offline at their leisure.

Things just got more serious. A group has created a way to take over hotel Wi-Fi. In July 2017, the FireEye security team found a nasty bit of code, malware, linked to an email received by a series of hotels. The malware was called GAMEFISH. But this was no game and it certainly had a bad, in fact dangerous, smell! It was a ‘spear phishing’ attack on the hotel’s employees. This is an attack where fake emails try to get you to go to a malware site (phishing), but where the emails appear to be from someone you know and trust.

Once in the hotel network, so inside the security perimeter, the code searched for the machines running the hotel’s Wi-Fi and took them over. It then sat and watched, sniffing out passwords from the Wi-Fi traffic: what’s called a man-in-the-middle attack.

The report linked the malware to a very serious team of Russian hackers, called FancyBear (or APT28), who have been associated with high profile attacks on governments across the world. GAMEFISH used a software tool (an ‘exploit’) called EternalBlue, along with some code that compiled its Python scripts locally, to spread the attack. Would you believe, EternalBlue is thought to have been created by the US Government’s National Security Agency (NSA), but leaked by a hacker group! EternalBlue was used in the WannaCry ransomware too. This may all start to sound rather like a far-fetched thriller but it is not. This is real! So think before you click to join an unsecured public Wi-Fi.

 

 

Just between the two of us: mentalism and covert channels

by Peter W McOwan, Queen Mary University of London. This article was originally published on the CS4FN website.

Secret information should stay secret. Beware ‘covert channels’ though. They are a form of attack where an illegitimate way of transferring information is set up. Stopping information leaking is a bit like stopping water leaking – even the smallest hole can be exploited. Magicians have been using covert channels for centuries, doing mentalism acts that wow audiences with their ‘telepathic’ powers.

Illusionist image by Andrei Cássia from Pixabay

The secret codes of Mentalism

In the 1950s the Australian couple Sydney and Lesley Piddington took the entertainment world by storm. They had the nation perplexed, puzzled and entertained. They were seemingly able to communicate telepathically over great distances. It all started in World War 2 when Sydney was a prisoner of war. To keep up morale, he devised a mentalism act where he ‘read the minds’ of other soldiers. When he later married Lesley, they perfected the act and became an overnight sensation, attracting BBC radio audiences of 20 million. They communicated random words and objects selected by the audience, even when Lesley was in a circling aeroplane or Sydney was in a diving bell in a swimming pool. To this day their secret remains unknown, though many have tried to work it out. Perhaps they used a hidden transmitter. After all, that was fairly new technology then. Or perhaps they were using their own version of an old mentalism trick: a code to transmit information hidden in plain sight.

Sounds mysterious

Sydney had a severe stutter, and some suggested it was the pauses he made in words rather than the words themselves that conveyed the information. Using timing and silence to code information seems rather odd, but it can be used to great effect.

In the phone trick ‘Call the wizard’, for example, a member of the audience chooses any card from a pack. You then phone your accomplice. When they answer you say “I have a call for the wizard”. Your friend names the card suits: “Clubs … spades … diamonds … hearts”. When they reach the suit of the chosen card you say: “Thanks”.

Your phone friend now knows the suit and starts counting out the values, Ace to King. When they reach the chosen card value you say: “Let me pass you over”. Your accomplice now knows both suit and value so dramatically reveals the card to the person you pass the phone to.

This trick requires a shared understanding of the code words and the silence between them. When combined with the background count, information is passed. The silence is the code.
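
If you like to see a trick written down as an algorithm, here is a minimal Python sketch of ‘Call the wizard’. The names and the card order are just illustrative choices, not a real performer’s script; the point is that the only ‘message’ sent is when the performer chooses to speak.

```python
# Toy simulation of the 'Call the wizard' code (names and card order are
# illustrative choices). The only 'message' the performer sends is *when*
# they choose to speak during the accomplice's counting.

SUITS = ["Clubs", "Spades", "Diamonds", "Hearts"]
VALUES = ["Ace", "2", "3", "4", "5", "6", "7", "8", "9", "10",
          "Jack", "Queen", "King"]

def performer_interruptions(card):
    """For a card like ('Queen', 'Hearts'), return the two counting
    positions at which the performer interrupts."""
    value, suit = card
    return SUITS.index(suit), VALUES.index(value)

def accomplice_decode(suit_interrupt, value_interrupt):
    """The accomplice only ever hears two interruptions, yet can name the card."""
    return VALUES[value_interrupt], SUITS[suit_interrupt]

signals = performer_interruptions(("Queen", "Hearts"))
print(accomplice_decode(*signals))   # ('Queen', 'Hearts')
```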

Timing can similarly be used by a program to communicate covertly out of a secure network. Information might be communicated by the time a message is sent rather than its contents, for example.
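
As a rough sketch (not any particular real attack), a timing covert channel might look something like this in Python: every message is identical and harmless-looking, and only the gap before it carries a bit. The delay thresholds are arbitrary assumptions.

```python
import time

# Sketch of a timing covert channel: every message is identical and
# harmless-looking; only the gap before it carries a bit.
# The delay thresholds below are arbitrary assumptions.

SHORT, LONG = 0.1, 0.5              # seconds: short gap = 0, long gap = 1
THRESHOLD = (SHORT + LONG) / 2

def send(bits, transmit):
    transmit("ping")                        # marks the start time
    for bit in bits:
        time.sleep(LONG if bit else SHORT)  # the timing *is* the data
        transmit("ping")

def decode(arrival_times):
    gaps = [later - earlier
            for earlier, later in zip(arrival_times, arrival_times[1:])]
    return [1 if gap > THRESHOLD else 0 for gap in gaps]
```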

Codes on the table

Covert channels can be hidden in the existence and placement of things too. Here’s another trick.

The receiving performer leaves the room. A card is chosen from a pack by a volunteer. When the receiver arrives back they are instantly able to tell the audience the name of the card. The secret is in the table. Once the card has been selected, pack and box are replaced on the table. The agreed code might be:

If the box is face up and its flap is closed: Clubs.
If the box is face up and its flap is open: Spades.
If the box is face down and its flap is closed: Diamonds.
If the box is face down and its flap is open: Hearts.

That’s the suits taken care of. Now for the value. The performers agree in advance how to mentally chop up the card table into zones: top, middle and bottom of the table, and far right, right, left and far left. That’s 3 x 4 unique locations. 12 places for 12 values. The pack of cards is placed in the correct pre-agreed position, box face up or not, flap open or closed as needed. What about the 13th possibility? Have the audience member hold their hand out flat and leave the cards on it for them to ‘concentrate’ on.
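
Written out as a little Python sketch, the whole table code is just a lookup. The grid layout and the value order below are illustrative assumptions, not the performers’ actual agreement.

```python
# The table code as a lookup (grid layout and value order here are
# illustrative assumptions, not the performers' actual agreement).

SUIT_CODE = {("face up", "closed"): "Clubs",
             ("face up", "open"): "Spades",
             ("face down", "closed"): "Diamonds",
             ("face down", "open"): "Hearts"}

ROWS = ["top", "middle", "bottom"]
COLS = ["far left", "left", "right", "far right"]
VALUES = ["Ace", "2", "3", "4", "5", "6",
          "7", "8", "9", "10", "Jack", "Queen"]   # 12 grid spots; the King is
                                                  # the 'hold it in your hand' case

def decode(orientation, flap, row, col):
    value = VALUES[ROWS.index(row) * len(COLS) + COLS.index(col)]
    return value, SUIT_CODE[(orientation, flap)]

print(decode("face down", "open", "middle", "far right"))   # ('8', 'Hearts')
```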

Again a similar idea can be used as a covert channel to subvert a security system: information might be passed based on whether a particular file exists or not, say.
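
A toy version of that file-existence channel might look like this in Python. The filenames are made-up placeholders; real attacks would be sneakier.

```python
import os
import tempfile

# Toy file-existence covert channel: one bit per agreed filename,
# leaked simply by creating the file or not. Filenames are placeholders.

FLAG_DIR = tempfile.gettempdir()

def leak_bit(name, bit):
    path = os.path.join(FLAG_DIR, name)
    if bit:
        open(path, "w").close()          # file exists  -> 1
    elif os.path.exists(path):
        os.remove(path)                  # file missing -> 0

def read_bit(name):
    return int(os.path.exists(os.path.join(FLAG_DIR, name)))

leak_bit("weather_report.tmp", 1)
print(read_bit("weather_report.tmp"))    # 1
```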

Making it up as you go along

These are just a couple of examples of the clever ideas mentalists have used to amaze and entertain audiences with feats of seemingly superhuman powers. Our cs4fn mentalism portal has more. Some claim they have the powers for real, but with two dedicated performers and a lot of cunning memory work, it’s often hard to decipher performers’ methods. Covert channels can be similarly hard to spot.

Perhaps the Piddingtons’ secret was actually a whole range of different methods. Just before she died, Lesley Piddington is said to have told her son, “Even if I wanted to tell you how it was done, I don’t think I would be able”. However it was done, they were using some form of covert channel to cement their place in magic history. As Sydney said at the end of each show: “You be the judge”.

 

Answers to yesterday’s bumper puzzle compendium

CS4FN Christmas Computing Advent Calendar – Answers

 

Previous Advent Calendar posts

CS4FN Advent – Day 1 – Woolly jumpers, knitting and coding (1 December 2021)

 

CS4FN Advent – Day 2 – Pairs: mittens, gloves, pair programming, magic tricks (2 December 2021)

 

CS4FN Advent – Day 3 – woolly hat: warming versus cooling (3 December 2021)

 

CS4FN Advent – Day 4 – Ice skate: detecting neutrinos at the South Pole, figure-skating motion capture, Frozen and a puzzle (4 December 2021)

 

CS4FN Advent – Day 5 – snowman: analog hydraulic computers (aka water computers), digital compression, and a puzzle (5 December 2021)

 

CS4FN Advent – Day 6 – patterned bauble: tracing patterns in computing – printed circuit boards, spotting links and a puzzle for tourists (6 December 2021)

 

CS4FN Advent – Day 7 – Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!) (7 December 2021)

 

CS4FN Advent – Day 8: gifts, and wrapping – Tim Berners-Lee, black boxes and another computing puzzle (8 December 2021)

 

CS4FN Advent – Day 9: gingerbread man – computing and ‘food’ (cookies, spam!), and a puzzle (9 December 2021)

 

CS4FN Advent – Day 10: Holly, Ivy and Alexa – chatbots and the useful skill of file management. Plus win at noughts and crosses – (10 December 2021)

 

CS4FN Advent – Day 11: the proof of the pudding… mathematical proof (11 December 2021)

 

CS4FN Advent – Day 12: Computer Memory – Molecules and Memristors – (12 December 2021)

 

CS4FN Advent – Day 13: snowflakes – six-sided symmetry, hexahexaflexagons and finite state machines in computing (13 December 2021)

 

CS4FN Advent – Day 14 – Why is your internet so slow + a festive kriss-kross puzzle (14 December 2021)

 

CS4FN Advent – Day 15 – a candle: optical fibre, optical illusions (15 December 2021)

 

CS4FN Advent – Day 16: candy cane or walking aid: designing for everyone, human computer interaction (16 December 2021)

 

CS4FN Advent – Day 17: reindeer and pocket switching (17 December 2021)

 

 

CS4FN Advent – Day 18: cracker or hacker? Cyber security (18 December 2021) – this post

 

 

 

CS4FN Advent – Day 16: candy cane or walking aid: designing for everyone, human computer interaction

Welcome to Day 16 of the CS4FN Christmas Computing Advent Calendar in which we’re posting a blog post every day in December until (and including) Christmas Day.

We’re celebrating the breadth of computing research and also the history of CS4FN, a project which has been distributing free magazines to subscribing UK schools since 2005 (ask your teacher to subscribe for next year’s magazine).

Today’s advent calendar picture is of a candy cane which made me think both of walking aids and of support sticks that alert others that the person using it is blind or visually impaired.

A white candy cane with green and red stripes.

We’ve worked with several people over the years to write about their research into making life easier for people with a variety of disabilities. Issue 19 of our magazine (“Touch it, feel it, hear it!”) focused on the DePiC project (‘Design Patterns for Inclusive Collaboration’) which included work on helping visually impaired sound engineers to use recording studio equipment, and you can read one of the articles (see ‘2. The Haptic Wave’) from that magazine below.

Our most recent CS4FN magazine (issue 27, called “Smart Health: decisions, decisions, decisions“) was about Bayesian mathematics and its use in computing, but one of those uses might be an app with the potential to help people with arthritis get medical support when they most need it (rather than having to wait until their next appointment) – download the magazine by clicking on its title and scroll to pages 16 & 17 (p9 of the 11 page PDF). Our writing also supports the (obvious) case that disabled people must be involved at the design and decision-making stages.

 

1. Design for All (and by All!)

by Paul Curzon, QMUL. This article was originally published on the CS4FN website.

Making things work for everyone

Designing for the disabled – that must be a niche market, mustn’t it? Actually no. One in five people have a disability of some kind! More surprising still, the disabled have been the inspiration behind some of the biggest companies in the world. Some of the ideas out there might eventually give us all super powers.

Just because people have disabilities doesn’t mean they can’t be the designers, the innovators themselves of course. Some of the most innovative people out there were once labelled ‘disabled’. Just because you are different doesn’t mean you aren’t able!

Where do innovators get their ideas from? Often they come from people driven to support people currently disadvantaged in society. The resulting technologies then not only help those with disabilities but become the everyday objects we all rely on. A classic example is the idea of reducing the kerbs on pavements to make it possible for people in wheelchairs to get around. Turns out of course that they also help people with pushchairs, bikes, roller-blades and more. That’s not just a one-off example, some of the most famous inventors and biggest companies in the world have their roots in ‘design for all’.

Designing for more extreme situations pushes designers into thinking creatively, thinking out of the box. That’s when totally new solutions turn up. Designing for everyone is just a good idea!

2. Blind driver filches funky feely sound machine! The Haptic Wave

by Jane Waite, QMUL. This article was originally published on the CS4FN website.

In his recent music video, the blind musician Joey Stuckey commandeers and then drives off in a car – and yes, he is blind. How can a blind person drive a car, and what has that got to do with him trying to filch a sound machine? Maybe taking the car was just a stunt, but he really did try to run off with a novel sound machine!

As well as fronting his band Joey is an audio engineer. Unlike driving a car, which is all about seeing things around you – signs, cars, pedestrians – being an audio engineer seems a natural job for someone who is blind. It’s about recording, mixing and editing music, speech and sound effects. What matters most is that the person has a good ear. Having the right skills could easily lead to a job in the music industry, in TV and films, or even in the games industry. It’s also an important job. Getting the sound right is critical to the experience of a film or game. You don’t want to be struggling to hear mumbling actors, or have the sound effects drown out a key piece of information in a game.

Peter Francken in his studio. Image from Wikimedia Commons.

Mixing desks

Once upon a time audio engineers used massive physical mixing desks. That was largely OK for a blind person as they could remember the positions of the controls as well as feel the buttons. As the digital age has marched on, mixing desks have been replaced by Digital Audio Workstations. They are computer programs and the trouble is that, despite being about sound, they are based on vision.

When we learn about sound we are shown pictures of wavy lines: sound waves. Later, we might use an oscilloscope or music editing software, and see how, if we make a louder sound, the curves get taller on the screen: the amplitude. We get to hear the sound and see the sound wave at the same time. That’s this multimodal idea again, two ways of sensing the same thing.

But hang on, sound isn’t really a load of wavy lines curling out of our mouths, and shooting away from guitar strings. Sound is energy and atoms pushing up against each other. But we think of sound as a sound wave to help us understand it. That’s what a computer scientist calls abstraction: representing things in a simpler way. Sound waves are an abstraction, a simplified representation, of sound itself.

Sound waveform image by Gordon Johnson from Pixabay

The representation of sound as sound waves, as a waveform, helps us work with sound, and with Digital Audio Workstations it is now essential for audio engineers. The engineer works with lines, colours, blinks and particularly sound waves on a screen as they listen to the sound. They can see the peaks and troughs of the waves, helping them find the quiet, loud and distinctive moments of a piece of music at a glance, for example. That’s great as it makes the job much easier… but only if you are fully sighted. It makes things impossible for someone with a visual impairment. You can’t see the sound waves on the editing screen. Touching a screen tells you nothing. Even though it’s ultimately about sounds, doing your job has been made as hard as driving a car. This is rather sad given computers have the potential to make many kinds of work much more accessible to all.

Feel the sound

The DePIC research team, a group of people from Goldsmiths, Queen Mary University of London and Bath universities with a mission to solve problems that involve the senses, decided to fix this. They brought together computer scientists, design experts, cognitive scientists and, most importantly of all, audio engineers who have visual impairments. Working together over two years in workshops, they shared their experiences and ideas, developing, testing and improving prototypes to figure out how a visually impaired engineer might ‘see’ sound waves. They created the first ever plug-in software for professional Digital Audio Workstations that makes peak level meters completely accessible. It uses ‘sonification’: it turns those visual signals into sound! They also created the HapticWave, a device that enables a user to feel rather than see a sound wave.

The HapticWave

The HapticWave combines novel hardware and software to provide a new interface to the traditional Digital Audio Workstation. The hardware includes a long wooden box with a plastic slider. As you move the slider right and left you move forward and backwards through the music. On the slider there is a small brass button, called a fader. Tiny embossed stripes on the side of the slider let you know where the fader is relative to the middle and ends of the slider. It moves up and down in sync with the height of the sound wave. So in a quiet moment the fader returns to the centre of the slider. When the music is loud, the fader zooms to the top of the handle. As you slide forwards and backwards through the music the little button shoots up and down, up and down tracing the waveform. You feel its volume changing. Music with heavy banging beats has your brass button zooming up and down, so mind your fingers!
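
The idea can be sketched in a few lines of Python. This is not the real HapticWave software, just an illustration of the mapping it makes: the slider position picks a moment in the recording, and the fader height follows the local loudness there.

```python
# Not the real HapticWave software - just the mapping it makes:
# slider position -> a moment in the track -> local loudness -> fader height.

def fader_height(samples, slider_position, window=500, max_height=1.0):
    """samples: audio samples in the range -1.0..1.0
       slider_position: 0.0 (start of track) .. 1.0 (end of track)"""
    centre = int(slider_position * (len(samples) - 1))
    lo, hi = max(0, centre - window), min(len(samples), centre + window)
    peak = max(abs(s) for s in samples[lo:hi])   # local loudness
    return peak * max_height                     # quiet -> low, loud -> high
```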

So back to the title of the article! Joey trialled the HapticWave at a research workshop and rather wanted to take one home. He loved it so much that he jokingly tried distracting the researchers to get one. But he didn’t get away with it – maybe his getaway car just wasn’t fast enough!

3. An audio illusion, and an audiovisual one

This one-minute video illustrates an interesting audio illusion, demonstrating that our brains are ‘always using prior information to make sense of new information coming in’.

The McGurk Effect

You can read more about the McGurk effect on page 7 of issue 5 of the CS4FN magazine, called ‘The Perception Deception‘.

 

4. Previous Advent Calendar posts

CS4FN Advent – Day 1 – Woolly jumpers, knitting and coding (1 December 2021)

 

CS4FN Advent – Day 2 – Pairs: mittens, gloves, pair programming, magic tricks (2 December 2021)

 

CS4FN Advent – Day 3 – woolly hat: warming versus cooling (3 December 2021)

 

CS4FN Advent – Day 4 – Ice skate: detecting neutrinos at the South Pole, figure-skating motion capture, Frozen and a puzzle (4 December 2021)

 

CS4FN Advent – Day 5 – snowman: analog hydraulic computers (aka water computers), digital compression, and a puzzle (5 December 2021)

 

CS4FN Advent – Day 6 – patterned bauble: tracing patterns in computing – printed circuit boards, spotting links and a puzzle for tourists (6 December 2021)

 

CS4FN Advent – Day 7 – Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!) (7 December 2021)

 

CS4FN Advent – Day 8: gifts, and wrapping – Tim Berners-Lee, black boxes and another computing puzzle (8 December 2021)

 

CS4FN Advent – Day 9: gingerbread man – computing and ‘food’ (cookies, spam!), and a puzzle (9 December 2021)

 

CS4FN Advent – Day 10: Holly, Ivy and Alexa – chatbots and the useful skill of file management. Plus win at noughts and crosses – (10 December 2021)

 

CS4FN Advent – Day 11: the proof of the pudding… mathematical proof (11 December 2021)

 

CS4FN Advent – Day 12: Computer Memory – Molecules and Memristors – (12 December 2021)

 

CS4FN Advent – Day 13: snowflakes – six-sided symmetry, hexahexaflexagons and finite state machines in computing (13 December 2021)

 

CS4FN Advent – Day 14 – Why is your internet so slow + a festive kriss-kross puzzle (14 December 2021)

 

CS4FN Advent – Day 15 – a candle: optical fibre, optical illusions (15 December 2021)

 

CS4FN Advent – Day 16: candy cane or walking aid: designing for everyone, human computer interaction – this post

 

 

 

CS4FN Advent – Day 7 – Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!)

Welcome to Day 7 of our advent calendar. Yesterday’s post was about Printed Circuit Boards; today’s theme is the Christmas robin redbreast, which features on lots of Christmas cards and is making a special appearance on our CS4FN Computing advent calendar.

A little robin redbreast.

In this longer post we’ll focus on the ways computer scientists are learning about our feathered friends and we’ll also make room for some of the bird-brained April Fools jokes in computing too.

We hope you enjoy it, and there’s also a puzzle at the end.

 

1. Computing Sounds Wild – bird is the word

Our free CS4FN magazine, Computing Sounds Wild (you can download a copy here), features the word ‘bird’ 60 times so it’s definitely very bird-themed.

“An interest in nature and an interest in computers don’t obviously go well together. For a band of computer scientists interested in sound they very much do, though. In this issue we explore the work of scientists and engineers using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn, can help recognize birds even when they can’t be seen, so helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer-generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.”

 

2. Singing bird – a human choir singing birdsong

by Jane Waite, QMUL
This article was originally published on the CS4FN website and can also be found on page 15 in the magazine linked above.

“I’m in a choir”. “Really, what do you sing?” “I did a blackbird last week, but I think I’m going to be woodpecker today, I do like a robin though!”

This is no joke! Marcus Coates, a British artist, got up very early and, working with a wildlife sound recordist, Geoff Sample, used 14 microphones to record the dawn chorus over lots of chilly mornings. They slowed the sounds down and matched up each species of bird with different types of human voices. Next they created a film of 19 people making birdsong: each person sang a different bird in their own habitat – a car, a shed, even a lady in the bath! The 19 tracks are played together to make the dawn chorus. See it on YouTube below.

Marcus didn’t stop there: he wrote a new bird song score. Yes, a new top ten bird hit for people to sing, but they have to do it very slowly. People sing ‘bird’ about 20 times slower than birds sing ‘bird’: ‘whooooooop’, ‘whooooooop’, ‘tweeeeet’. For a special performance, a choir learned the new song, a new dawn chorus. They sang the slowed-down version live, which was recorded, speeded back up and played to the audience – I was there! It was amazing! A human performance became a minute of tweeting joy. Close your eyes and ‘whoop’, you were in the woods at the crack of dawn!

Computationally thinking a performance

Computational thinking is at the heart of the way computer scientists solve problems. Marcus Coates doesn’t claim to be a computer scientist; he is an artist who looks for ways to see how people are like other animals. But we can get an idea of what computational thinking is all about by looking at how he created his sounds. Firstly, he and wildlife sound recordist Geoff Sample had to focus on the individual bird sounds in the original recordings and ignore detail they didn’t need – doing abstraction – listening for each bird and working out which aspects of its sound were important. They looked for patterns, isolating each voice; sometimes the birds’ performance was messy and they could not hear particular species clearly, so they were constantly checking for quality. For each bird, they listened and listened until they found just the right ‘slow it down’ speed. Different birds needed different speeds for people to be able to mimic them, and different kinds of human voices suited each bird type: attention to detail mattered enormously. They had to check the results carefully, evaluating, making sure each really did sound like the appropriate bird and that all fitted together into the Dawn Chorus soundscape. They also had to create a bird language, another abstraction, a score as track notes – and that is just an algorithm for making sounds!

 

3. Sophisticated songbird singing – how do they do it?

by Dan Stowell, QMUL
This article was originally published on the CS4FN website and can also be found on page 14 in the magazine linked above.

How do songbirds make such complex sounds? The answer is on a different branch of the tree of evolution…
We humans have a set of vocal folds (or vocal cords) in our throats, and they vibrate when we speak to make the pitched sound. Air from your lungs passes over them and they chop up the column of air letting more or less through and so making sound waves. This vocal ‘equipment’ is similar in mammals like monkeys and dogs, our evolutionary neighbours. But songbirds are not so similar to us. They make sounds too, but they evolved this skill separately, and so their ‘equipment’ is different: they actually have two sets of vocal folds, one for each lung.

Image by Dieter_G from Pixabay

Sometimes if you hear an impressive, complex sound from a bird, it’s because the bird is actually using the two sides of their voice-box together to make what seems like a single extra-long or extra-fancy sound. Songbirds also have very strong muscles in their throat that help them change the sound extremely quickly. Biologists believe that these skills evolved so that the birds could tell potential mates and rivals how healthy and skillful they were.

So if you ever wondered why you can’t quite sing like a blackbird, now you have a good excuse!

 

4. Data transmitted on the wing

Computers are great ways of moving data from one place to another and the internet can let you download or share a file very quickly. Before I had the internet at home, if I wanted to work on a file on my home computer I had to save a copy from my work computer onto a memory stick and plug it in to my laptop at home. Once I ‘got connected’ at home I was then able to email myself with an attachment and use my home broadband to pick up the file. Now I don’t even need to do that. I can save a file on my work computer, it synchronises with the ‘cloud’ and when I get home I can pick up where I left off. When I was using the memory stick my rate of data transfer was entirely down to the speed of road traffic as I sat on the bus on the way to work. Fairly slow, but the data definitely arrived in one piece.
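
You can put rough numbers on that bus journey. The figures below are invented for illustration (a 32 GB stick and a 40-minute ride), but they show why physically carrying data can have surprisingly high bandwidth even though the latency is terrible.

```python
# Back-of-envelope 'bandwidth of the bus' sum. The stick size and journey
# time are invented for illustration, not figures from the article.

stick_bytes = 32 * 10**9           # a 32 GB memory stick
journey_seconds = 40 * 60          # a 40-minute bus ride

bus_bits_per_second = stick_bytes * 8 / journey_seconds
print(f"{bus_bits_per_second / 10**6:.0f} Mbit/s")   # roughly 107 Mbit/s
```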

In 1990 a joke memo was published for April Fool’s Day which suggested the use of homing pigeons as a form of internet, in which the birds might carry small packets of data. The memo, called ‘IP over Avian Carriers’ (that is, a bird-based internet), was written in a mock-serious tone (you can read it here) but although it was written for fun the idea has actually been used in real life too. Photographers in remote areas with minimal internet signal have used homing pigeons to send their pictures back.

The beautiful (and quite possibly wi-fi ready, with those antennas) Victoria Crowned Pigeon. Not a carrier pigeon admittedly, but much more photogenic.  Image by Foto-Rabe from Pixabay

A company in the US which offers adventure holidays including rafting used homing pigeons to return rolls of film (before digital photography took over) to the company’s base. The guides and their guests would take loads of photos while having fun rafting on the river and the birds would speed the photos back to the base, where they could be developed, so that when the adventurous guests arrived later their photos were ready for them.

Further reading

Pigeons keep quirky Poudre River rafting tradition afloat (17 July 2017) Coloradoan.

 

5. Serious fun with pigeons

On April Fool’s Day in 2002 Google ‘admitted’ to its users that the reason their web search results appeared so quickly and were so accurate was because, rather than using automated processes to grab the best result, Google was actually using a bank of pigeons to select the best results. Millions of pigeons viewing web pages and pecking to pick the best one for you when you type in your search question. Pretty unlikely, right?

In a rather surprising non-April Fool twist some researchers decided to test out how well pigeons can distinguish different types of information in hospital photographs. They trained pigeons by getting them to view medical pictures of tissue samples taken from healthy people as well as pictures taken from people who were ill. The pigeons had to peck one of two coloured buttons and in doing so learned which pictures were of healthy tissue and which were diseased. If they pecked the correct button they got an extra food reward.

Pigeon, possibly pondering people’s photographs. Image by Davgood Kirshot from Pixabay

The researchers then tested the pigeons with a fresh set of pictures, to see if they could apply their learning to pictures they’d not seen before. Incredibly the pigeons were pretty good at separating the pictures into healthy and unhealthy, with an 80 per cent hit rate.

Further reading

Principle behind Google’s April Fools’ pigeon prank proves more than a joke (27 March 2019) The Conversation.

 

6. Today’s puzzle

You can download this as a PDF to PRINT or as an editable PDF that you can fill in on a COMPUTER.

You might wonder “What do these kriss-kross puzzles have to do with computing?” Well, you need to use a bit of logical thinking to fill one in and come up with a strategy. If there’s only one word of a particular length then it has to go in that space and can’t fit anywhere else. You’re then using pattern matching to decide which other words can fit in the spaces around it and which match the letters where they overlap. Younger children might just enjoy counting the letters and writing them out, or practising phonics or spelling.
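
That strategy is itself a little algorithm, and you can sketch it in a few lines of Python. The word list and the slot pattern below are made up for illustration.

```python
from collections import Counter

# Sketch of the kriss-kross strategy: place any word whose length is unique
# first, then keep only candidates whose letters match the squares they
# overlap. The word list and slot pattern are made up for illustration.

words = ["SNOW", "STAR", "ROBIN", "TINSEL", "CRACKER"]

length_counts = Counter(len(w) for w in words)
forced = [w for w in words if length_counts[len(w)] == 1]
print(forced)    # ['ROBIN', 'TINSEL', 'CRACKER'] - only one word of each length

def fits(word, pattern):
    """pattern like 'S__R', where '_' is an empty square."""
    return len(word) == len(pattern) and all(
        p == "_" or p == c for p, c in zip(pattern, word))

print([w for w in words if fits(w, "S__R")])   # ['STAR']
```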

We’ll post the answer tomorrow.

7. Answer to yesterday’s puzzle

Previous Advent Calendar posts

CS4FN Advent – Day 1 – Woolly jumpers, knitting and coding (1 December 2021)

 

CS4FN Advent – Day 2 – Pairs: mittens, gloves, pair programming, magic tricks (2 December 2021)

 

CS4FN Advent – Day 3 – woolly hat: warming versus cooling (3 December 2021)

 

CS4FN Advent – Day 4 – Ice skate: detecting neutrinos at the South Pole, figure-skating motion capture, Frozen and a puzzle (4 December 2021)

 

CS4FN Advent – Day 5 – snowman: analog hydraulic computers (aka water computers), digital compression, and a puzzle (5 December 2021)

 

CS4FN Advent – Day 6 – patterned bauble: tracing patterns in computing – printed circuit boards, spotting links and a puzzle for tourists (6 December 2021)

 

CS4FN Advent – Day 7 – Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!) (7 December 2021) – this post

 

 

 

 

How to get a head in robotics (includes a free papercraft activity with a robot that expresses ’emotions’)

by Paul Curzon, Queen Mary University of London

EMYS robot

If humans are ever to get to like and live with robots we need to understand each other. One of the ways that people let others know how they are feeling is through the expressions on their faces. A smile or a frown on someone’s face tells us something about how they are feeling and how they are likely to react. Some scientists think it might be possible for robots to express feelings this way too, but understanding how a robot can usefully express its ‘emotions’ (what its internal computer program is processing and planning to do next) is still in its infancy. A group of researchers in Poland, at Wroclaw University of Technology, have come up with a clever new design for a robot head that could help a computer show its feelings. It’s inspired by the Teenage Mutant Ninja Turtles cartoon and movie series.

The real Teenage Mutant Ninja Turtle
Their turtle-inspired robotic head, called EMYS, which stands for EMotive headY System, is cleverly also the name of a European pond turtle, Emys orbicularis. Taking his inspiration from cartoons, the project’s principal ‘head’ designer Jan Kedzierski created a mechanical marvel that can convey a whole range of different emotions by tilting a pair of movable discs, one of which contains highly flexible eyes and eyebrows.

The real Emys orbicularis (European pond turtle)

Eye see
The lower disc imitates the movements of the human lower jaw, while the upper disk can mimic raising the eyebrows and wrinkling the forehead. There are eyelids and eyebrows linked to each eye. Have a look at your face in the mirror, then try pulling some expressions like sadness and anger. In particular look at what these do to your eyes. In the robot, as in humans, the eyelids can move to cover the eye. This helps in the expression of emotions like sadness or anger, as your mirror experiment probably showed.

Pop eye
But then things get freaky and fun. Following the best traditions of cartoons, when EMYS is ‘surprised’ the robot’s eyes can shoot out to a distance of more than 10 centimetres! This well-known ‘eyes out on stalks’ cartoon technique, which deliberately over-exaggerates how people’s eyes widen and stare when they are startled, is something we instinctively understand even though our eyes don’t really do this. It makes use of the fact that cartoons take the real world to extremes, and audiences understand and are entertained by this sort of comical exaggeration. In fact it’s been shown that people are faster at recognising cartoons of people than recognising the un-exaggerated original.

High tech head builder
The mechanical internals of EMYS consist of lightweight aluminium, while the covering external elements, such as the eyes and discs, are made of lightweight plastic using 3D rapid prototyping technology. This technology allows a design on the computer to be ‘printed’ in plastic in three dimensions. The design in the computer is first converted into a stack of thin slices. Each slice of the design, from the bottom up, individually oozes out of a printer and on to the slice underneath, so layer-by-layer the design in the computer becomes a plastic reality, ready for use.

Facing the future
A ‘gesture generator’ computer program controls the way the head behaves. Expressions like ‘sad’ and ‘surprised’ are broken down into a series of simple commands to the high-speed motors, moving the various lightweight parts of the face. In this way EMYS can behave in an amazingly fluid way – its eyes can ‘blink’, its neck can turn to follow a person’s face or look around. EMYS can even shake or nod its head. EMYS is being used on the Polish group’s social robot FLASH (FLexible Autonomous Social Helper) and also with other robot bodies as part of the LIREC project (www.lirec.eu [archived]). This big project explores the question of how robot companions could interact with humans, and helps find ways for robots to usefully show their ‘emotions’.
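
As a much-simplified sketch (not the real EMYS control code; the motor names and angles below are invented), a gesture generator boils down to a table that breaks each named expression into a list of motor commands.

```python
# A much-simplified 'gesture generator' sketch. Motor names and angles are
# invented for illustration - this is not the real EMYS control code.

EXPRESSIONS = {
    "sad":       [("upper_disc", -10), ("eyelids", 60), ("lower_disc", 5)],
    "surprised": [("upper_disc", 15), ("eyelids", 0), ("eyes_extend", 10)],
}

def perform(expression, send_to_motor=print):
    # break the named expression into simple motor commands
    for motor, value in EXPRESSIONS[expression]:
        send_to_motor(f"{motor} -> {value}")

perform("surprised")
```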

Do try this at home
In this issue, there is a chance for you to program an EMYS-like robot. Follow the instructions on the Emotion Machine in the centre of the magazine (see printable version below) and build your own EMYS. By selecting a series of different commands in the Emotion Engine boxes, the expression on EMYS’s face will change. How many different expressions can you create? What are the instructions you need to send to the face for a particular expression? What emotion do you think that expression looks like – how would you name it? What would you expect the robot to be ‘feeling’ if it pulled that face?

Print, cut out and make your own emotional robot. The strips of paper at the top (‘sliders’) containing the expressions and letters are slotted into the grooves on the robot’s face and happy or annoyed faces can be created by moving the sliders.

Go further
Why not draw your own sliders, with different eye shapes, mouth shapes and so on. Explore and experiment! That’s what computer scientists do.

*****************************

This article was originally published on CS4FN (Computer Science For Fun) and on page 7 of issue 13 of the CS4FN magazine. You can download a free PDF copy of that issue, as well as all of our other free magazines and booklets.

 

Braille: binary, bits & bytes – Letters from the Victorian Smog

Letters from the Victorian Smog
by Paul Curzon, Queen Mary University of London

Reading Braille image by Myriams-Fotos from Pixabay

We take for granted that computers use binary: to represent numbers, letters, or more complicated things like music and pictures… any kind of information. That was something Ada Lovelace realised very early on. Binary wasn’t invented for computers though. Its first modern use as a way to represent letters was actually invented in the first half of the 19th century. It is still used today: Braille.

Braille is named after its inventor, Louis Braille. He was born 6 years before Ada though they probably never met as he lived in France. He was blinded as a child in an accident and invented the first version of Braille when he was only 15 in 1824 as a way for blind people to read. What he came up with was a representation for letters that a blind person could read by touch.

Choosing a representation for the job is one of the most important parts of computational thinking. It really just means deciding how information is going to be recorded. Binary gives ways of representing any kind of information that is easy for computers to process. The idea is just that you create codes to represent things made up of only two different characters: 1 and 0. For example, you might decide that the binary for the letter ‘p’ was: 01110000. For the letter ‘c’ on the other hand you might use the code, 01100011. The capital letters, ‘P’ and ‘C’ would have completely different codes again. This is a good representation for computers to use as the 1’s and 0’s can themselves be represented by high and low voltages in electrical circuits, or switches being on or off.
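
As it happens, the example codes above are the standard ASCII/Unicode codes for those letters, and a couple of lines of Python will confirm it.

```python
# The example codes in the text are the standard ASCII/Unicode codes for
# those letters; printing them out confirms it.
for letter in "p", "c", "P", "C":
    print(letter, format(ord(letter), "08b"))
# p 01110000
# c 01100011
# P 01010000
# C 01000011
```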

The first representation Louis Braille chose wasn’t great though. It had dots, dashes and blanks – a three symbol code rather than the two of binary. It was hard to tell the difference between the dots and dashes by touch, so in 1837 he changed the representation – switching to a code of dots and blanks.

He had invented the first modern form of writing based on binary.

Braille works in the same way as modern binary representations for letters. It uses collections of raised dots (1s) and no dots (0s) to represent them. Each gives a bit of information in computer science terms. To make the bits easier to touch they’re grouped into pairs. To represent all the letters of the alphabet (and more) you just need 3 pairs as that gives 64 distinct patterns. Modern Braille actually has an extra row of dots giving 256 dot/no dot combinations in the 8 positions so that many other special characters can be represented. Representing characters using 8 bits in this way is exactly the equivalent of the computer byte.

Modern computers use a standardised code, called Unicode. It gives an agreed code for referring to the characters in pretty well every language ever invented including Klingon! There is also a Unicode representation for Braille using a different code to Braille itself. It is used to allow letters to be displayed as Braille on computers! Because all computers using Unicode agree on the representations of all the different alphabets, characters and symbols they use, they can more easily work together. Agreeing the code means that it is easy to move data from one program to another.
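
For example, in Unicode’s Braille Patterns block the blank cell is U+2800 and each of the eight dot positions flips one bit of the character code, so a dot pattern really is just a byte. A short Python sketch:

```python
# In Unicode's Braille Patterns block the blank cell is U+2800 and each of
# the eight dot positions flips one bit of the code, so a dot pattern
# really is just a byte.

def braille_char(dots):
    """dots: which of the 8 dot positions (1-8) are raised, e.g. {1, 2, 3, 4}"""
    offset = sum(1 << (dot - 1) for dot in dots)   # 8 dots -> one byte
    return chr(0x2800 + offset)

print(braille_char({1, 2, 3, 4}))   # Braille 'p'
print(braille_char({1, 4}))         # Braille 'c'
```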

The 1830s were an exciting time to be a computer scientist! This was around the time Charles Babbage met Ada Lovelace and they started to work together on the analytical engine. The ideas that formed the foundation of computer science must have been in the air, or at least in the Victorian smog.

**********************************

Further reading

This post was first published on CS4FN and also appears on page 7 of Issue 20 of the CS4FN magazine. You can download a free PDF copy of the magazine here as well as all of our previous magazines and booklets, at our free downloads site.

The RNIB has guidance for sighted people who might be producing Braille texts for blind people, about how to use Braille on a computer and get it ready for correct printing.

This History of Braille article also references an earlier ‘Night Writing’ system developed by Charles Barbier to allow French soldiers in the 1800s to read military messages without using a lamp (which gave away their position, putting them at risk). Barbier’s system inspired Braille to create his.

A different way of representing letters is Morse Code which is a series of audible short and long sounds that was used to communicate messages very rapidly via telegraphy.

Find out about Abraham Louis Breguet’s ‘Tactful Watch‘ that let people work out what time it was by feel, instead of rudely looking at their watch while in company.

 

Only the fittest slogans survive!

by Paul Curzon, Queen Mary University of London

Assembly line image by OpenClipart-Vectors from Pixabay

 

Being creative isn’t just for the fun of it. It can be serious too. Marketing people are paid vast amounts to come up with slogans for new products, and in the political world, a good, memorable soundbite can turn the tide over who wins and loses an election. Coming up with great slogans that people will remember for years needs both a mastery of language and a creative streak too. Algorithms are now getting in on the act, and if anyone can create a program as good as the best humans, they will soon be richer than the richest marketing executive. Polona Tomašič and her colleagues from the Jožef Stefan Institute in Slovenia are one group exploring the use of algorithms to create slogans. Their approach is based on the way evolution works – genetic algorithms. Only the fittest slogans survive!

A mastery of language
To generate a slogan, you give their program a short description on the slogan’s topic – a new chocolate bar perhaps. It then uses existing language databases and programs to give it the necessary understanding of language.

First, it uses a database of common grammatical links between pairs of words generated from wikipedia pages. Then skeletons of slogans are extracted from an Internet list of famous (so successful) slogans. These skeletons don’t include the actual words, just the grammatical relationships between the words. They provide general outlines that successful slogans follow.

From the passage given, the program pulls out keywords that can be used within the slogans (beans, flavour, hot, milk, …). It generates a set of fairly random slogans from those words to get started. It does this just by slotting keywords into the skeletons along with random filler words in a way that matches the grammatical links of the skeletons.

Breeding Slogans
New baby slogans are now produced by mating pairs of initial slogans (the parents). This is done by swapping bits into the baby from each parent. Both whole sections and individual words are swapped in. Mutation is allowed too. For example, adjectives are added in appropriate places. Words are also swapped for words with a related meaning. The resulting children join the new population of slogans. Grammar is corrected using a grammar checker.

Culling Slogans
Slogans are now culled. Any that are the same as existing ones go immediately. The slogans are then rated to see which are fittest. This uses simple properties like their length, the number of keywords used, and how common the words used are. More complex tests used are based on how related the meanings of the words are, and how commonly pairs of words appear together in real sentences. Together these combine to give a single score for the slogan. The best are kept to breed in the next generation, the worst are discarded (they die!), though a random selection of weaker slogans are also allowed to survive. The result is a new set of slogans that are slightly better than the previous set.

Many generations later…
The program breeds and culls slogans like this for thousands, even millions of generations, gradually improving them, until it finally chooses its best. The slogans produced are not yet world beating on their own, and vary in quality as judged by humans. For chocolate, one run came up with slogans like “The healthy banana” and “The favourite oven”, for example. It finally settled on “The HOT chocolate” which is pretty good.

Hot chocolate image by Sabrina Ripke from Pixabay
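
To make the shape of the algorithm concrete, here is a toy genetic-algorithm loop in Python with the same breed, mutate, score and cull structure. The word lists and the ‘fitness’ score are deliberately crude stand-ins, not the researchers’ real scoring.

```python
import random

# A toy genetic-algorithm loop: breed, mutate, score, cull, repeat.
# The word lists and the 'fitness' score are crude stand-ins.

KEYWORDS = ["hot", "chocolate", "creamy", "favourite", "healthy"]
FILLERS = ["the", "so", "simply", "your"]

def random_slogan():
    return [random.choice(FILLERS), random.choice(KEYWORDS), random.choice(KEYWORDS)]

def fitness(slogan):
    # crude: reward keywords, penalise length
    return sum(word in KEYWORDS for word in slogan) - 0.1 * len(slogan)

def breed(parent_a, parent_b):
    cut = random.randrange(1, min(len(parent_a), len(parent_b)))
    child = parent_a[:cut] + parent_b[cut:]          # crossover
    if random.random() < 0.3:                        # occasional mutation
        child[random.randrange(len(child))] = random.choice(KEYWORDS)
    return child

population = [random_slogan() for _ in range(30)]
for generation in range(200):
    children = [breed(*random.sample(population, 2)) for _ in range(30)]
    # score everything and cull: only the fittest 30 survive
    population = sorted(population + children, key=fitness, reverse=True)[:30]

print(" ".join(population[0]))
```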

More work is needed on the program, especially its fitness function – the way it decides what is a good slogan and what isn’t. As it stands this sort of program isn’t likely to replace anyone’s marketing department. It could help with brainstorming sessions though, sparking new ideas but leaving humans to make the final choice. Supporting human creativity rather than replacing it is probably just as rewarding for the program after all.

 


 

This article was originally published on CS4FN and also appears on p13 of Creative Computing, issue 22 of the CS4FN magazine. You can download a free PDF copy of the magazine as well as all of our previous booklets and magazines from the CS4FN downloads site.

 

 

What are birds actually saying?

by Dan Stowell and Paul Curzon, Queen Mary University of London

Birds make so much noise, and it’s very complex. Is it just babble, or are they saying complicated things to each other? If so, could we work out what they are saying, what it means? Could we learn their language and speak to the birds?

Gull image by Thanasis Papazacharias from Pixabay

We know that bird communication is not as complicated as the words and sentences in human speech. So far, no one has been able to find grammatical patterns like those we find in human language. There apparently aren’t rules for birds like the ones we have about verbs and nouns. Birds don’t have to learn grammar! Exactly how complex bird languages are is still hotly debated, though.

Sometimes they’re passing on information about predators, or food, or sometimes just advertising their own fitness – showing off to get a mate (a bit like karaoke nights). Scientists have proved that such specific kinds of information are in the sounds birds make by observing bird behaviour. By playing recordings of birds and seeing how other birds react, they can see what information was communicated by a particular sound. If you play a ‘predator near’ call, for example, then other birds flee, but they stay put if you play other calls. They get the message.

Birds are definitely passing on specific information when they sing.

It turns out some birds have even learnt the languages of other animals and use it both to help those other animals and to support a life of crime. Many animals listen for the alarm calls of the animals around them, and so flee when others see a problem. Birds called Drongos, for example, act as lookouts for Meerkats, giving warning calls when they see Meerkat predators, allowing them to return to the safety of their burrows. However, the Drongos also sound false alarms every so often. They do it when they see a Meerkat with some juicy morsel. As the Meerkats run, the Drongo swoops in to steal the abandoned food.

Unfortunately for the Drongo, Meerkats are quite clever and get wise to the con. Eventually, they start to ignore the Drongo and only listen for their own Meerkat sentry’s call. The Drongo has another trick though. They are really good at mimicking sounds they hear, just like parrots. They have learnt to speak Meerkat just like the scientists do in experiments. So when the Meerkats stop reacting, the Drongos just switch tactics and start making perfect Meerkat language alarm calls instead. Once again the food is theirs.

Drongos give false alarms so they can steal food.

While most of us can’t reproduce bird sounds ourselves, and so talk directly to animals, we can certainly write programs to do it. In Star Wars, C3PO is a master of languages, speaking millions. Real robots of the near future will be able to mimic the sounds of whatever animals they wish and communicate with them in at least the simple ways that animals of different species listen and talk to each other. Perhaps something like this might be used to help protect endangered species from their predators, for example, watching for hawks and issuing timely warnings. We just have to hope they don’t turn to the Dark Side, like the Drongos, and use these skills to support a life of crime.

 


This article was originally published on CS4FN and in issue 21 of the CS4FN magazine ‘Computing Sounds Wild’ on p3. You can download a PDF copy of Issue 21, as well as all of our previous published material, free, at the CS4FN downloads site.

Computing Sounds Wild explores the work of scientists and engineers who are using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn, can help recognize birds even when they can’t be seen, so helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.

Front cover of CS4FN Issue 21 – Computing sounds wild

 

 

Ada Lovelace: Visionary

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace

By Paul Curzon, Queen Mary University of London

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – how to make a machine that could do calculations so we wouldn’t need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to get the money and build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That’s where Ada came in. They worked together to try and realise their joint dream, working out between them how to program.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with a creative flair. While Charles had come up with the innovative idea of the Analytical Engine itself, he didn’t see beyond his original idea of the computer as a calculator; she saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television, all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.

Ada saw even further though. She combined maths with a creative flair and so she realised that not only could computers store and play music, they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn’t just the first programmer, she was the first truly creative programmer.

This article was originally published at the CS4FN website, along with lots of other articles about Ada Lovelace. We also have a special Ada Lovelace-themed issue of the CS4FN magazine which you can download as a PDF (click picture below).

See also: The very first computers and Ada Lovelace Day (2nd Tuesday of October). Help yourself to our Women in Computing posters PDF, or sign up to get FREE copies posted to your school (UK-based only, please).

 

The red sock of doom – trying to catch mistakes before they happen ^JB

Washing machine mistake

A red sock in with your white clothes wash – guess what happened next? What can you do to prevent it from happening again? Why should a computer scientist care? It turns out that red socks have something to teach us about medical gadgets.

How can we stop red socks from ever turning our clothes pink again? We need a strategy. Here are some possibilities.

  • Don’t wear red socks.
  • Take a ‘how to wash your clothes’ course.
  • Never make mistakes.
  • Get used to pink clothes.

Let’s look at them in turn – will they work?

Don’t wear red socks: That might help but it’s not much use if you like red socks or if you need them to match your outfit. And how would it help when you wear purple, blue or green socks? Perhaps your clothes will just turn green instead.

Take a ‘how to wash your clothes’ course: Training might help: you’d certainly learn that a red sock and white clothes shouldn’t be mixed, you probably did know that anyway, though. It won’t stop you making a similar mistake again.

Never make misteaks: Just never leave a red sock in your white wash. If only! Unfortunately everyone makes mistakes – that’s why we have erasers on pencils and a delete key on computers – this idea just won’t work.

Get used to pink clothes: Maybe, but it’s not ideal. It might not be so great turning up to school in a pink shirt.

What if the problem’s more serious?

We can probably live with pink clothes, but what happens if a similar mistake is made at a hospital? Not socks, but medicines. We know everyone makes mistakes so how do we stop those mistakes from harming patients? Special machines are used in hospitals to pump medicine directly into a patient’s arm, for example, and a nurse needs to tell it how much medicine to give – if the dose is wrong the patient won’t get better, and might even get worse.

What have we learned from our red sock strategies? We can’t stop giving patients medicine and we don’t want to get used to mistakes so our first and fourth strategies won’t work. We can give nurses more training but everyone makes mistakes even when trained, so the third suggestion isn’t good enough either and it doesn’t stop someone else making the same mistake.

We need to stop thinking of mistakes as a problem that people make and instead as a problem that systems thinking can solve. That way we can find solutions that work for everyone. One possibility is to check whether changes to the device might make mistakes less likely in the first place.

Errors? Or arrows?

Most medical machines are controlled with a panel with numbered keys (a number keypad) like on mobile phones, or up and down arrows (an arrow keypad) like you sometimes get on alarm clocks. CHI+MED researchers have been asking questions like: which way is best for entering numbers quickly, but also which is best for entering numbers accurately? They’ve been running experiments where people use different keypads, are timed and their mistakes are recorded. The researchers also track where people are looking while they use the keypads. Another approach has been to create mathematical descriptions of the different keypads and then mathematically explore how bad different errors might be.

It turns out that if you can see the numbers on a keypad in front of you it’s very easy to type them in quickly, though not always correctly! You need to check the display to see if you have actually put in the right ones. Worse, mistakes that are made are often massive – ten times too much or more. The arrow keypads are a little slower to use but because people are already looking at the display (to see what numbers are appearing) they can help nurses be more accurate, not only are fewer mistakes made but those that are made tend to be smaller.
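
A toy illustration (not the CHI+MED researchers’ model) of why the two keypads fail so differently: one slip on a number pad can scale the dose by ten, while one slip on an arrow pad only nudges it by a step.

```python
# Toy illustration (not the CHI+MED model) of how one slip differs
# between a digit keypad and an arrow keypad.

intended = 5.0                # say, 5 ml per hour

# Digit keypad: accidentally typing an extra '0' before pressing start
typed = float("50")           # meant to type "5"
print(typed / intended)       # 10.0 -> a ten-times overdose

# Arrow keypad: one accidental extra press of the 'up' arrow
step = 0.1
entered = intended + step
print(entered / intended)     # 1.02 -> a two per cent error
```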

Smart machines help users

A medical device that actively helps users avoid mistakes helps everyone using it (and the patients it’s being used on!). Changing the interface to reduce errors isn’t the only solution though. Modern machines have ‘intelligent drug libraries’ that contain information about the medicines and what sort of doses are likely and safe. Someone might still mistakenly tell the machine to give too high a dose but now it can catch the error and ask the nurse to double-check. That’s like having a washing machine that can spot bright socks in a white wash and refuses to switch on until they have been removed.
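
A sketch of that dose check in Python follows; the drug name and limits are invented examples, not real clinical values.

```python
# Sketch of the 'intelligent drug library' idea: the pump knows a plausible
# range for each medicine and queries anything outside it. The drug name
# and limits are invented examples, not real clinical values.

DRUG_LIBRARY = {"examplamycin": {"soft_max": 10.0, "hard_max": 50.0}}   # ml/hour

def check_dose(drug, dose):
    limits = DRUG_LIBRARY[drug]
    if dose > limits["hard_max"]:
        return "refuse"                           # like refusing to start with a red sock inside
    if dose > limits["soft_max"]:
        return "ask the nurse to double-check"
    return "accept"

print(check_dose("examplamycin", 48.0))   # ask the nurse to double-check
```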

Building machines with a better ability to catch errors (remember, we all make mistakes) and helping users to recover from them easily is much more reliable than trying to get rid of all possible errors by training people. It’s not about avoiding red socks, or errors, but about putting better systems in place to make sure that we find them before we press that big ‘Start’ button.

This story was originally published here and is an article from CS4FN, a free computer science magazine from Queen Mary University of London which is sent to subscribing UK schools. To find out more please visit our About page.

Further reading / watching
You can find a copy of this article on pages 4 and 5 of issue 17 of the CS4FN magazine (Machines Making Medicine Safer).

From 50 seconds into this Paddington 2 clip you can see a ‘real world’ example of a red sock getting into the laundry.