Can a computer create a taste in your mouth? Imagine scrolling down a list of flavours and then savouring your sweet choice from a digital lollipop. Not keen on that flavour? Just click and choose a different one, and another and another. No calories, just the taste.
Nimesha Ranasinghe, a researcher at the National University of Singapore, is developing a Tongue Mounted Digital Taste Interface, or digital lollipop. It sends tiny electrical signals to the very tip of your tongue to stimulate your taste buds and create a virtual taste!
One of UNESCO’s 2014 ’10 best innovations in the world’, the prototype doesn’t quite look like a lollipop (yet). There are two parts to this sweet sensation, the wearable tongue interface and the control system. The bit you put in your mouth, the tongue interface, has two small silver electrodes. You touch them to the tip of your tongue to get the taste hit. The control system creates a tiny electrical current and a minuscule temperature change, creating a taste as it activates your taste buds.
The prototype lollipop can create sour, salty, bitter, sweet, minty, and spicy sensations but it’s not just a bit of food fun. What if you had to avoid sweet foods or had a limited sense of taste? Perhaps the lollipop can help people with food addictions, just like the e-cigarette has helped those trying to give up smoking?
But eating is more than just a flavour on your tongue: it is a multi-modal experience. You see the red of a ripe strawberry, hear the crunch of a carrot, feel sticky salt on chippy fingers, smell the Sunday roast, anticipate that satisfied snooze afterwards. How might computers simulate all that? Does it start with a digital lollipop? We will have to wait and see, hear, taste, smell, touch and feel!
Taste over the Internet
The Singapore team are exploring how to send tastes over the Internet. They have suggested rules for sending ‘taste’ messages between computers, called the Taste Over Internet Protocol, including a messaging format called TasteXML. They’ve also outlined the design for a mobile phone with electrodes to deliver the flavour! Sweet or salt anyone?
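The actual TasteXML format isn’t spelled out here, so the little Python sketch below is purely illustrative: the tags, fields and values are invented to show roughly what a ‘taste message’ between computers might look like.

```python
# Purely illustrative: the real TasteXML format isn't described in the
# article, so the tags, fields and values below are invented.
import xml.etree.ElementTree as ET

def make_taste_message(taste, intensity, duration_ms):
    """Build a made-up 'TasteXML' message describing one taste stimulus."""
    msg = ET.Element("taste")
    ET.SubElement(msg, "sensation").text = taste            # e.g. "sweet"
    ET.SubElement(msg, "intensity").text = str(intensity)   # say, a 0-10 scale
    ET.SubElement(msg, "duration").text = str(duration_ms)  # electrode on-time
    return ET.tostring(msg, encoding="unicode")

print(make_taste_message("sweet", 7, 2000))
```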
You pull a cloak around you and disappear! Reality or science fiction? Harry Potter’s invisibility cloak is surely Hogwarts’ magic that science can’t match. Even in Harry Potter’s world it takes powerful magic and complicated spells to make it work. Turns out even that kind of magic can be done with a combination of materials science and computer science. Professor Susumu Tachi of the University of Tokyo has developed a cloak made of thousands of tiny beads. Cameras video what is behind you and a computer system then projects the appropriate image onto the front of the cloak. The beads are made of a special material called retro-reflectrum. It is vital to give the image a natural feel – normal screens give too flat a look, losing the impression of seeing through the person. Now you see me, now you don’t at the flick of a switch.
But could an invisibility cloak, without tiny screens on it, ever be a reality? It sounds impossible, especially if you understand how light behaves. It bounces off the things around us, travelling in straight lines. You see them when that reflected light eventually reaches your eyes. I can see the red toy over there because red light bounced from it to me. For it to be invisible, no light from it must reach my eyes, while at the same time light from everything else around should. How could that be possible? Akram Alomainy of Queen Mary, University of London, tells us more.
Well maybe things aren’t quite that simple…halls of mirrors, rainbows, polar bears and desert mirages all suggest some odd things can happen with light! They show that manipulating light is possible and that we may even be able to bend it in a way that alters the way things look – even humans.
Have you ever wondered how the hall of mirrors in a fun fair distorts your reflection? Some make us look short and fat while others make us tall and slim! It’s all about controlling the behaviour of light. The light rays still travel in straight lines, but the mirrors deceive the eye. The light seems to arrive from a different place to reality because the mirrors are curved, not flat, making the light bounce at odd angles.
A rainbow is an object we see that isn’t really there. Rainbows occur because white light doesn’t actually exist: it is just light of all colours mixed up. The colour of an object you see depends on which colours pass through or get reflected, and which get absorbed. The light is white when it hits the raindrops, but as it passes through each drop it is bent (refracted) and separates back out into the whole spectrum of colours. The colours head off at slightly different angles, which is why they appear in the different rainbow positions.
What about polar bears? Did you know that they have black skins and semi-transparent hair? You see them as white because of the way the hollow hairs reflect sunlight.
So what does this have to do with invisibility? Well, it suggests that with light all is not as it seems. Perhaps we can manipulate it to do anything we want.
Now for the clincher – mirages! They show that invisibility cloaks ought to be a possibility. Light from the sun travels in a straight line through the sky, so we see everything as it is. Except not quite. In places like deserts, where the temperature is very high at noon, apparently weird things happen to the light. The difference in temperature, and thus in density, between the higher layers of air and the layers closer to the ground can be quite large. That difference makes light coming from the sky change direction as it passes through each layer. It bends rather than just travelling in a straight line to us. It is that bent image of the sky that looks like a pool of water – the mirage. Our brains assume the light travelled in a straight line, so they misinterpret its location. Now, to make something invisible we just need to make light bend round it. That invisibility cloak is a possibility if we can just engineer what mirages do – bend light!
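You can get a feel for this layered bending in a few lines of Python. This is only a toy sketch – the refractive index values are invented, and a real mirage involves a continuous temperature gradient – but applying Snell’s law at each boundary between air layers shows a near-horizontal ray tilting ever flatter until it reflects back upwards, just like light over hot desert sand.

```python
import math

# Invented refractive indices: cooler, denser air on top; hotter, thinner
# air near the ground. Real values differ, but the behaviour is the same.
layers = [1.00030, 1.00025, 1.00020, 1.00015, 1.00010]

angle = math.radians(89)   # a ray skimming down towards the ground
for n1, n2 in zip(layers, layers[1:]):
    s = (n1 / n2) * math.sin(angle)   # Snell's law: n1 sin(a1) = n2 sin(a2)
    if s > 1:
        print("The ray reflects back upwards - we see the sky on the ground: a mirage!")
        break
    angle = math.asin(s)
    print(f"Ray bends to {math.degrees(angle):.2f} degrees from vertical")
```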
Nano-machines
That is the basic idea and it is an area of science called ‘transformation optics’ that makes it possible. The science tells us about the properties that each point of an object must have to make light waves travel in any particular way we wish through it. To make it happen engineers must then create special materials with those properties. These materials are known as metamaterials. Their properties are controlled using electromagnetism, which is where the electronic engineers come in! You can think of them as being made of vast numbers of tiny electrical machines built into big human-scale structures. Each tiny machine is able to control how light passes through it, even bending light in a way no natural material could. If the machines are small enough – nanotechnology scale, as small as the wavelength of light – and their properties can be controlled precisely enough to match the science’s predictions, then we can make light passing through them do anything we want. For invisibility, the aim is to control those properties so the light bends as it passes through a metamaterial cloak. If the light comes out the other side of the cloak unchanged and travelling in the same direction as it entered, while avoiding objects in the middle, then those objects will be invisible.
Simple cloaking devices that work this way have already been created but they are still very limited. One of the major challenges is the range of light they can work with. At the moment it’s possible to make a cloak that bends a single colour frequency, but not all light. As Yang Hao, a professor working in this area at Queen Mary, notes: “The obstacle engineers face is the complex manufacturing techniques needed to build devices that can bend light across the whole visible light spectrum. However, with the progress being made in nanotechnologies this could become a possibility in the near future”.
Perhaps we should leave the last word to J.K. Rowling: “A suspicious object like that, it was clearly full of Dark Magic.” So while we should appreciate the significance of such an invention we should perhaps be careful about the negative consequences!
This article is a composite of an article originally published on the CS4FN website and from one published on pages 10 and 11 of the first issue of EE4FN magazine (download from the panel below). The topic is also part of The Magic of Computer Science and a new book on ‘Conjuring with Computation’ which is coming soon.
by Peter W McOwan, Queen Mary University of London
(From the archive)
The famous inventor of the telephone, Alexander Graham Bell, was born in 1847 in Edinburgh, Scotland. His story is a fascinating one, showing that like all great inventions, a combination of talent, timing, drive and a few fortunate mistakes are what’s needed to develop a technology that can change the world.
A talented Scot
As a child the young Alexander Graham Bell, Aleck, as he was known to his family, showed remarkable talents. He had the ability to look at the world in a different way, and come up with creative solutions to problems. Aged 14, Bell designed a device to remove the husks from wheat by combining a nailbrush and paddle into a rotary-brushing wheel.
Family talk
The Bell family had a talent with voices. His grandfather had made a name for himself as a notable, but often unemployed, actor. Aleck’s mother was deaf, but rather than use her ear trumpet to talk to her like everyone else did, the young Alexander came up with the cunning idea that speaking to her in low, booming tones very close to her forehead would allow her to hear his voice through its vibrations. This special bond with his mother gave him a lifelong interest in the education of deaf people, which, combined with his inventive genius and some odd twists of fate, was to change the world.
A visit to London, and a talking dog
While visiting London with his father, Aleck was fascinated by a demonstration of Sir Charles Wheatstone’s “speaking machine”, a mechanical contraption that made human-like noises. On returning to Edinburgh, their father challenged Aleck and his older brother to come up with a machine of their own. After some hard work and scrounging bits from around the place, they built a machine with a mouth, throat, nose, movable tongue, and bellows for lungs. It worked: it made human-like sounds. Delighted by his success, Aleck went a step further and massaged the mouth of his Skye terrier so that the dog’s growls were heard as words. Pretty wruff on the poor dog.
Speaking of teaching
By the time he was 16, Bell was teaching music and elocution at a boys’ boarding school. He was still fascinated by trying to help those with speech problems improve their quality of life, and was very successful in this, later publishing two well-respected books called ‘The Practical Elocutionist’ and ‘Stammering and Other Impediments of Speech’. Alexander and his brother toured the country giving demonstrations of their techniques to improve people’s speech. He also started his studies at the University of London, where a mistake in reading German was to change his life and lay the foundations for the telecommunications revolution.
A ‘silly’ language mistake that changed the world
At university, Bell became fascinated by the ideas of the German physicist Hermann von Helmholtz. Von Helmholtz had produced a book, ‘On the Sensations of Tone’, in which he said that vowel sounds, a, e, i, o and u, could be produced using electrical tuning forks and resonators. However, Bell couldn’t read German very well, and mistakenly believed that von Helmholtz had written that vowel sounds could be transmitted over a wire. This misunderstanding changed history. As Bell later stated, “It gave me confidence. If I had been able to read German, I might never have begun my experiments in electricity.”
Tragedy and Travel
Things were going well for young Bell’s career when tragedy struck. Bell and both his brothers contracted tuberculosis, a common disease at the time. His two brothers died, and at the age of 23, still suffering from the disease, Bell left Britain, moving first to Ontario in Canada to convalesce and then to Boston to work in a school for deaf mutes.
The time for more than dots and dashes
His dreams of transmitting voices over a wire were still spinning round in his creative head. It just needed some new ideas to spark him off again. Samuel Morse had developed Morse code and the electric telegraph, which allowed single messages in the form of long and short electronic pulses, dots and dashes, to be transmitted rapidly along a wire over huge distances. Bell saw the similarity between sending multiple messages down one wire and the multiple notes in a musical chord: his “harmonic telegraph” could be a way to send voices.
Chance encounter
Again chance played its role in telecommunications history. At the electrical machine shop of Charles Williams, Bell ran into young Thomas Watson, a skilled electrical machinist able to build the devices that Bell was devising. The two teamed up and started to work toward making Bell’s dream a reality. To make it work they needed to invent two things: something to measure a voice at one end, and another device to reproduce the voice at the other – what we would today call the microphone and the speaker.

The speaker accident

June 2, 1875 was a landmark day for team Bell and Watson. Working in their laboratory they were trying to free a reed, a small flat piece of metal, which they had wound too tightly to the pole of an electromagnet. In trying to free it Watson produced a ‘twang’. Bell heard the twang and came running. It was a sound similar to the sounds in human speech; this was the solution to producing an electronic voice, a discovery that must have come as a relief for all the dogs in the Boston area.

The mercury microphone

Bell had also discovered that a wire vibrated by his voice while partially dipped in a conducting liquid, like mercury or battery acid, could be made to produce a changing electrical current. They had a device that could transform the voice into an electronic signal. Now all that was needed was to put the two inventions together.
The first ’emergency’ phone call (allegedly)
On March 10, 1876, Bell and Watson set out to test their new system. The story goes that Bell knocked over a container with battery acid, which they were using as the conducting liquid in the ‘microphone’. Spilled acid tends to be nasty and Bell shouted out “Mr. Watson, come here. I want you!” Watson, working in the next room, heard Bell’s cry for help through the wire. The first phone call had been made, and Watson quickly went through to answer it. The telephone was invented, and Bell was only 29 years old.
The world listens
The telephone was finally introduced to the world at the Centennial Exhibition in Philadelphia in 1876. Bell quoted Hamlet over the phone line from the main building 100 yards away, causing the surprised Brazilian Emperor Dom Pedro to exclaim, “My God, it talks”, and talk it did. From there on, the rest, as they say, is history. The telephone spread throughout the world changing the way people lived their lives. Though it was not without its social problems. In many upper class homes it was considered to be vulgar. Many people considered it intrusive (just like some people’s view of mobile phones today!), but eventually it became indispensable.
Can’t keep a good idea down
Inventor Elisha Gray also independently designed his own version of the telephone. In fact both he and Bell rushed their designs to the US patent office within hours of each other, but Alexander Graham Bell patented his telephone first. With massive amounts of money to be made, Elisha Gray and Alexander Graham Bell entered into a famous legal battle over who had invented the telephone first, and Bell had to fight many more legal battles over his lifetime as others claimed they had invented the technology first. Bell won all the legal cases – partly, many claimed, because he was such a good communicator and had such a convincing talking voice. As is often the way, few people now remember the other inventors. In fact, it is now recognised that the Italian Antonio Meucci had invented a method of electronic voice communication earlier, though he did not have the funds to patent it.
Fame and Fortune under Forty
Bell became rich and famous, and he was only in his mid thirties. The Bell Telephone Company was set up, and later went on to become AT&T, one of America’s foremost telecommunications giants.
Read Terry Pratchett’s brilliant book ‘Going Postal’ for a fun fantasy about inventing and making money from communication technology on Discworld.
by Howard Williams, Queen Mary University of London
(From the archive)
Can computers lend a creative hand to the production of new magic tricks? That’s a question our team, led by Peter McOwan at Queen Mary, wrestled with.
The idea that computers can help with creative endeavours like music and drawing is nothing new – turn the radio on and the song you are listening to will have been produced with the help of a computer somewhere along the way, whether it’s a synthesiser sound, or the editing of the arrangement, and some music is created purely inside software. Researchers have been toiling away for years, trying to build computer systems that actually write the music too! Some of the compositions produced in this way are surprisingly good! Inspired by this work, we decided to explore whether computers could create magic.
The project to build creative software to help produce new magic tricks started with a magical jigsaw that could be rearranged in certain ways to make objects on its surface disappear. Pretty cool, but what part did the computer play? A jigsaw is made up of different pieces, each with four sides – the number of different ways all these pieces can be put together is very large; for a human to sit down and try out all the different configurations would take many hours (perhaps thousands, if not millions!). Whizzing through lots of different combinations is something a computer is very good at. When there are simply too many different combinations for even a computer to try out exhaustively, programmers have to take a different approach.
Evolve a jigsaw
A genetic algorithm is a program that mimics the biological process of natural selection. We used one to intelligently search through all the interesting combinations that the jigsaw might be made up from. A population of jigsaws is created, and is then ‘evolved’ via a process that evaluates how good each combination is in each generation, gradually weeding out the combinations that wouldn’t make good jigsaws. At the end of the process you hope to be left with a winner; a jigsaw that matches all the criteria that you are hoping for. In this particular case, we hoped to find a jigsaw that could be built in two different ways, but each with a different number of the same object in the picture, so that you could appear to make an object disappear and reappear again as you made and remade it. The idea is based on a very old trick popularised by Sam Loyd, but our aim was to create a new version that a human couldn’t realistically have come up with, without a lot of free time on their hands!
To understand what role the computer played, we need to explore the Genetic Algorithm mechanism it used to find the best combinations. How did the computer know which combinations were good or bad? This is something creative humans are great at – generating ideas, and discarding the ones they don’t like in favour of ones they do. This creative process gradually leads to new works of art, be they music, painting, or magic tricks. We tackled this problem by first running some experiments with real people to find out what kind of things would make the jigsaw seem more ‘magical’ to a spectator. We also did experiments to find out what would influence a magician performing the trick. This information was then fed into the algorithm that searched for good jigsaw combinations, giving the computer a mechanism for evaluating the jigsaws, similar to the ones a human might use when trying to design a similar trick.
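As an illustration, here is a minimal genetic algorithm skeleton in Python. It is not the project’s actual code: the ‘jigsaws’ are just lists of numbers standing in for piece choices, and the fitness function is a placeholder standing in for the ‘magicalness’ scores gathered from the spectator and magician experiments described above.

```python
# A minimal genetic algorithm skeleton (not the project's actual code).
import random

PIECES, LENGTH, POP_SIZE, GENERATIONS = 8, 16, 100, 200

def fitness(jigsaw):
    # Placeholder: a real version would score how 'magical' the jigsaw is,
    # based on experiments with spectators and magicians.
    return -sum(abs(a - b) for a, b in zip(jigsaw, jigsaw[1:]))

def crossover(mum, dad):
    cut = random.randrange(1, LENGTH)      # splice two parent jigsaws together
    return mum[:cut] + dad[cut:]

def mutate(jigsaw, rate=0.05):
    return [random.randrange(PIECES) if random.random() < rate else piece
            for piece in jigsaw]

population = [[random.randrange(PIECES) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # weed out the weaker designs
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("Best jigsaw found:", max(population, key=fitness))
```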
More tricks
We went on to use these computational techniques to create other new tricks, including a card trick, a mind reading trick on a mobile phone, and a trick that relies on images and words to predict a spectator’s thought processes. You can find out more, including downloading the jigsaw, at www.Qmagicworld.wordpress.com
Is it creative, though?
There is a lot of debate about whether this kind of ‘artificial intelligence’ software is really creative in the way humans are, or in fact creative in any way at all. After all, how would the computer know what to look out for if the researchers hadn’t configured the algorithms in specific ways? Does a computer even understand the outputs that it creates? The fact is, though, that these systems do produce novel things – new music, new magic tricks – and sometimes in surprising and pleasing ways, previously not thought of.
Are they creative (and even intelligent)? Or are they just automatons bound by the imaginations of their creators? What do you think?
by Patricia Charlton and Stefan Poslad, Queen Mary University of London
The best technology helps people solve real problems. To be a creative innovator you need not only to be able to create a solution that works, but also to spot the need in the first place. Over the summer a group of sixth formers on internships at Queen Mary had a go at doing this. Ultimately their aim was to build something from a programmable gadget such as a BBC micro:bit or Raspberry Pi. They therefore had to learn about the different possible gadgets they could use, how to program them and how to control the on-board sensors available. They were then given the design challenge of creating a device to solve a community problem.
Tai Kirby wanted to help visually impaired people. He knew that it’s hard for someone with poor sight to tell when a bus is arriving. In busy cities like London this problem is even worse, as buses for different destinations often arrive at once. His solution was a prototype that announces when a specific bus is arriving, letting the person know which is which. He wrote it in Python, using a Raspberry Pi linked to low energy Bluetooth devices.
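Tai’s actual code isn’t shown here, but a rough sketch of the idea might look like this. Everything specific in it is an assumption: the choice of the bluepy library, the made-up beacon addresses and the signal-strength threshold are for illustration only.

```python
# A sketch of the idea, not Tai's actual code. Assumes each bus carries a
# Bluetooth LE beacon and we know which beacon address means which route.
# Uses the bluepy library (Linux / Raspberry Pi).
from bluepy.btle import Scanner

KNOWN_BUSES = {                       # made-up beacon addresses for illustration
    "aa:bb:cc:dd:ee:01": "Route 25 to Ilford",
    "aa:bb:cc:dd:ee:02": "Route 205 to Bow",
}

scanner = Scanner()
for device in scanner.scan(10.0):     # listen for ten seconds
    route = KNOWN_BUSES.get(device.addr)
    if route and device.rssi > -70:   # a strong signal means the bus is close
        print("Arriving now:", route) # a real build would speak this aloud
```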
The fun spell
Filsan Hassan decided to find a fun way to help young kids learn to spell. She created a gadget that associated different sounds with different letters of the alphabet, turning spelling words into a fun, musical experience. It needed two micro:bits and a screen communicating with each other using a radio link. One micro:bit controlled the screen while the other ran the main program that allowed children to choose a word, play a linked game and spell the word using a scrolling alphabet program she created. A big problem was how to make sure the combination of gadgets had a stable power supply. This needed a special circuit to get enough power to the screen without frying the micro:bit and sadly we lost some micro:bits along the way: all part of the fun!
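Here is a cut-down MicroPython sketch of the scrolling-alphabet idea, not Filsan’s actual program: button A steps through the letters and button B radios the chosen one to the second micro:bit.

```python
# MicroPython for the BBC micro:bit - a cut-down sketch of the idea only.
from microbit import display, button_a, button_b, sleep
import radio

radio.on()
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
index = 0

while True:
    display.show(ALPHABET[index])
    if button_a.was_pressed():        # scroll to the next letter
        index = (index + 1) % 26
    if button_b.was_pressed():        # choose this letter
        radio.send(ALPHABET[index])   # the other micro:bit plays its sound
    sleep(100)
```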
Jesus Esquivel Roman developed a small remote-controlled robot using a buggy kit. There are lots of applications for this kind of thing, from games to mine-clearing robots. The big challenge he had to overcome was how to do the navigation using a compass sensor. The problem was that the batteries and motor interfered with the calibration of the compass. He also designed a mechanism that used the accelerometer of a second micro:bit allowing the vehicle to be controlled by tilting the remote control.
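A sketch of that tilt-based remote control idea in MicroPython (again, not the actual project code) might look like this, with the buggy micro:bit listening for the radioed commands:

```python
# MicroPython sketch of a tilt remote: the hand-held micro:bit reads its
# accelerometer and radios simple steering commands to the buggy.
from microbit import accelerometer, sleep
import radio

radio.on()
while True:
    x = accelerometer.get_x()   # left/right tilt, roughly -1024..1024
    if x < -300:
        radio.send("left")
    elif x > 300:
        radio.send("right")
    else:
        radio.send("forward")
    sleep(100)
```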
Memory for patterns
Finally, Venet Kukran was interested in helping people improve their memory and thinking skills. He invented a pattern memory game using a BBC micro:bit, implemented in MicroPython. The game generates patterns that the player has to memorise and then replicate to score points. The program generates new patterns each time, so every game is different. The more you play, the more complex the patterns you have to remember become.
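Venet’s game isn’t listed here, but a toy MicroPython version of the idea could work like this, with arrows standing in for the patterns:

```python
# MicroPython for the BBC micro:bit - a toy version of the idea, not
# Venet's actual game. The micro:bit shows a growing random sequence of
# arrows; the player copies it back with button A (left) and button B (right).
from microbit import display, button_a, button_b, Image, sleep
import random

pattern = []
while True:
    pattern.append(random.choice(["left", "right"]))   # pattern grows each round
    for step in pattern:                               # show the whole sequence
        display.show(Image.ARROW_W if step == "left" else Image.ARROW_E)
        sleep(600)
        display.clear()
        sleep(200)
    for step in pattern:                               # read the player's answer
        guess = None
        while guess is None:
            if button_a.was_pressed():
                guess = "left"
            elif button_b.was_pressed():
                guess = "right"
            sleep(10)
        if guess != step:
            display.show(Image.SAD)                    # wrong - game over
            sleep(1000)
            pattern = []                               # start a new game
            break
    else:
        display.show(Image.HAPPY)                      # round complete
        sleep(500)
```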
As they found, you have to be very creative to be an innovator: both to spot real issues that need a solution and to overcome the problems you are bound to encounter along the way.
This article was originally published on the CS4FN website and a copy can also be found in issue 22 of the magazine called Creative Computing. You can download that as a PDF by clicking on the picture below and you can also download all of our free material, including back issues of the CS4FN magazine and other booklets, at our downloads site: https://cs4fndownloads.wordpress.com
Could your smartphone automatically tell you what species of bird is singing outside your window? If so how?
Mobile phones contain microphones to pick up your voice. That means they should be able to pick up the sound of birds singing too, right? And maybe even decide which bird is which?
Smartphone apps exist that promise to do just this. They record a sound, analyse it, and tell you which species of bird they think it is most likely to be. But a smartphone doesn’t have the sophisticated brain that we have, evolved over millions of years to understand the world around us. A smartphone has to be programmed by someone to do everything it does. So if you had to program an app to recognise bird sounds, how would you do it? Computer scientists have devised two very different ways to do this kind of decision making, and they are used by researchers for all sorts of applications, from diagnosing medical problems to recognising suspicious behaviour in CCTV images. Both ways are used by bird song recognition apps you can already buy.
If you ask a birdwatcher how to identify a blackbird’s sound, they will tell you specific rules. “It’s high-pitched, not low-pitched.” “It lasts a few seconds and then there’s a silent gap before it does it again.” “It’s twittery and complex, not just a single note.” So if we wrote down all those rules in a recipe for the machine to follow, each rule a little program that could say “Yes, I’m true for that sound”, an app combining them could decide when a sound matches all the rules and when it doesn’t.
[Image: young blackbird in Oxfordshire, from Wikipedia. Audio: a European blackbird (Turdus merula) singing merrily in Finland, from Wikipedia (song 1).]
This is called an ‘expert system’ approach. One difficulty is that it can take a lot of time and effort to actually write down enough rules for enough birds: there are hundreds of bird species in the UK alone! Each would need lots of rules to be hand crafted. It also needs lots of input from bird experts to get the rules exactly right. Even then it’s not always possible for people to put into words what makes a sound special. Could you write down exactly what makes you recognise your friends’ voices, and what makes them different from everyone else’s? Probably not! However, this approach can be good because you know exactly what reasons the computer is using when it makes decisions.
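To make the idea concrete, here is a toy expert system in Python. The features, thresholds and rules are invented for illustration, not taken from a real blackbird detector, but they show the ‘recipe of little yes/no rules’ approach.

```python
# A toy 'expert system': each rule is a little function that says yes or no
# to a sound. The features and thresholds are invented for illustration.
def high_pitched(sound):   return sound["pitch_hz"] > 2000
def few_seconds(sound):    return 1.0 < sound["duration_s"] < 5.0
def twittery(sound):       return sound["notes"] > 5

BLACKBIRD_RULES = [high_pitched, few_seconds, twittery]

def is_blackbird(sound):
    # The sound matches only if every rule says "Yes, I'm true for that sound"
    return all(rule(sound) for rule in BLACKBIRD_RULES)

mystery = {"pitch_hz": 2500, "duration_s": 3.2, "notes": 9}
print(is_blackbird(mystery))   # True: every rule said yes
```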
This is very different from the other approach which is…
Show it lots of examples
A lot of modern systems use the idea of ‘machine learning’, which means that instead of writing rules down, we create a system that can somehow ‘learn’ what the correct answer should be. We just give it lots of different examples to learn from, telling it what each one is. Once it has seen enough examples to get it right often enough, we let it loose on things we don’t know in advance. This approach is inspired by how the brain works. We know that brains are good at learning, so why not do what they do!
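As a tiny illustration of the machine learning route, here is a sketch using the scikit-learn library. The features (pitch, duration, number of notes) and the training examples are made up; a real app would extract far richer features from thousands of actual recordings.

```python
# A minimal machine-learning sketch with scikit-learn. The features and
# training data are invented; real systems learn from many real recordings.
from sklearn.ensemble import RandomForestClassifier

X_train = [[2500, 3.2, 9],   # each row: pitch (Hz), duration (s), notes
           [2600, 2.8, 7],
           [900,  0.5, 1],
           [850,  0.7, 1]]
y_train = ["blackbird", "blackbird", "crow", "crow"]

model = RandomForestClassifier().fit(X_train, y_train)   # learn from examples
print(model.predict([[2400, 3.0, 8]]))                   # -> ['blackbird'], we hope!
```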
One difficulty with this is that you can’t always be sure how the machine comes up with its decisions. Often the software is a ‘black box’ that gives you an answer but doesn’t tell you what justifies that answer. Is it really listening to the same aspects of the sound as we do? How would we know?
On the other hand, perhaps that’s the great thing about this approach: a computer might be able to give you the right answer without you having to tell it exactly how to do that!
It means we don’t need to write down a ‘recipe’ for every sound we want to detect. If it can learn from examples, and get the answer right when it hears new examples, isn’t that all we need?
Which way is best?
There are hundreds of bird species that you might hear in the UK alone, and many more in tropical countries. Human experts take many years to learn which sound means which bird. It’s a difficult thing to do!
So which approach should your smartphone use if you want it to help identify birds around you? You can find phone apps that use one approach or another. It’s very hard to measure exactly which approach is best, because the conditions change so much. Which one works best when there’s noisy traffic in the background? Which one works best when lots of birds sing together? Which one works best if the bird is singing in a different ‘dialect’ from the examples we used when we created the system?
One way to answer the question is to provide phone apps to people and to see which apps they find most useful. So companies and researchers are creating apps using the ways they hope will work best. The market may well then make the decision. How would you decide?
This article was originally published on the CS4FN website and can also be found on pages 10 and 11 of Issue 21 of the CS4FN magazine ‘Computing sounds wild’. You can download a free PDF copy of the magazine (below), or any of our other free material at our downloads site.
Animation isn’t a new field – artists have been creating animations for over a hundred years. While the technology used to create those animations has changed immensely during that time, modern computer generated imagery continues to employ some of the same techniques that were used to create the first animations.
The hard work of hand drawing
During the early days of animation, moving images were created by rapidly showing a sequence of still images. Each still image, referred to as a frame, was hand drawn by an artist. By making small changes in each new frame, characters were created that appeared to be walking, jumping and talking, or doing anything else that the artist could imagine.
In order for the animation to appear smooth, the frames need to be displayed quickly – typically at around 24 frames each second. This means that one minute of animation required artists to draw over 1400 frames. That means that the first feature-length animated film, a 70-minute Argentinean film called The Apostle, required over 100,000 frames to create.
Creating a 90-minute movie, the typical feature length for most animated films, took almost 130,000 hand-drawn frames. Despite these daunting numbers, many feature-length animated movies have been created using hand-drawn images.
Drawing with data
Today, many animations are created with the assistance of computers. Rather than simply drawing thousands of images of one character using a computer drawing program, artists can create one mathematical model to represent that character, from which all of his or her appearances in individual frames are generated. Artists manipulate the model, changing things like the position of the character’s limbs (so that the character can be made to walk, run or jump) and aspects of the character’s face (so that it can talk and express emotions). Furthermore, since the models only exist as data on a computer they aren’t confined by the physical realities that people are. As such, artists also have the flexibility to do physically impossible things such as shrinking, bending or stretching parts of a character. Remember Elastigirl, the stretchy mum in The Incredibles? All made of maths.
Once all of the mathematical models have been positioned correctly, the computer is used to generate an image of the models from a specific angle. Just like the hand-drawn frames of the past, this computer-generated image becomes one frame in the movie. Then the mathematical models representing the characters are modified slightly, and another frame is generated. This process is repeated to generate all of the frames for the movie.
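A few lines of Python with NumPy can illustrate this ‘adjust the model, render a frame, repeat’ loop. The ‘character’ here is just four 2D points, a stand-in for the vastly more complex models used in real films.

```python
# Sketch of the 'model, tweak, render, repeat' animation loop using NumPy.
import numpy as np

character = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [0.0, 2.0]])

def render(points, frame_number):
    # Stand-in for a real renderer: just report where the points ended up.
    print(f"frame {frame_number}: {points.round(2).tolist()}")

for frame in range(3):
    angle = np.radians(5 * frame)               # rotate a little more each frame
    rotation = np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle),  np.cos(angle)]])
    posed = character @ rotation.T + [0.1 * frame, 0.0]   # rotate, then shift right
    render(posed, frame)
```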
The more things change
You might have noticed that, despite the use of computers, the process of generating and displaying the animation remains remarkably similar to the process used to create the first animations over 100 years ago. The animation still consists of a collection of still images. The illusion of smooth movement is still achieved by rapidly displaying a sequence of frames, where each frame in the sequence differs only slightly from the previous one.
The key difference is simply that now the images may be generated by a computer, saving artists from hand drawing over 100,000 copies of the same character. Hand-drawn animation is still alive in the films of Studio Ghibli and Disney’s recent The Princess and the Frog, but we wonder if the animators of hand-drawn features might be tempted to look over at their fellow artists who use computers and shake an envious fist. A cramped fist, too, probably.
This article was originally published on the CS4FN website and also appears on page 3 of issue 11 of the CS4FN magazine “Computer animation proudly presents…” which you can download as a free PDF along with all of our other free material at our CS4FN downloads site.
Ada Lovelace, the ‘first programmer’, saw possibilities for computer science far wider than anyone else of her time imagined. For example, she mused that one day we might be able to create mathematical models of the human nervous system, essentially describing how electrical signals move around the body. The University of Oxford’s Blanca Rodriguez is interested in matters of the heart. She’s a bioengineer creating accurate computer models of human organs.
How do you model a heart? Well you first have to create a 3D model of its structure. You start with MRI scans. They give you a series of pictures of slices through the heart. To turn that into a 3D model takes some serious computer science: image processing that works out, from the pictures, what is tissue and what isn’t. Next you do something called mesh generation. That involves breaking up the model into smaller parts. What you get is more than just a picture of the surface of the organ but an accurate model of its internal structure.
So far so good, but it’s still just the structure. The heart is a working, beating thing, not just a sculpture. To understand it you need to see how it works. Blanca and her team are interested in simulating the electrical activity in the heart – how electrical pulses move through it. To do this they create models of the way individual cells propagate an electrical signal. Once you have this you can combine it with the model of the heart’s structure to give a model of how the whole organ works. You essentially have a lot of equations. Solving the equations gives a simulation of how electrical signals propagate from cell to cell.
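Here is a hugely simplified toy version of the idea in Python. Real models solve differential equations describing each cell; in this sketch a cell simply becomes excited one step after any of its neighbours does, so you can watch an ‘electrical wave’ spread across a grid of tissue.

```python
# A toy sketch of excitation spreading cell to cell across heart tissue.
# Real models solve per-cell differential equations; this is just a cartoon.
import numpy as np

SIZE, STEPS = 7, 4
fired = np.zeros((SIZE, SIZE), dtype=bool)
fired[3, 3] = True                      # stimulate the middle cell

for step in range(STEPS):
    neighbours = np.zeros_like(fired)
    neighbours[1:, :] |= fired[:-1, :]  # excitation arrives from above...
    neighbours[:-1, :] |= fired[1:, :]  # ...from below
    neighbours[:, 1:] |= fired[:, :-1]  # ...from the left
    neighbours[:, :-1] |= fired[:, 1:]  # ...and from the right
    fired |= neighbours                 # a neighbour fired, so we fire too
    print(f"step {step}: {fired.sum()} cells excited")
```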
The models Blanca’s team have created are based on a healthy rabbit heart. Now they have it, they can simulate it working and see if the results correspond to those from lab experiments. If they do, that suggests the team’s understanding of how cells work together is correct. When the results don’t match, that is still good, as it gives new questions to research. It would mean something about their initial understanding was wrong, driving new work to fix the problem, and so improve the models.
Once the models have been validated in this way – shown to be an accurate description of the way a rabbit’s heart works – they can be used to explore things you just can’t do with experiments: what happens when changes are made to the structure of the virtual heart, or how drugs change the way it works, for example. That can lead to new drugs.
They can also use it to explore how the human heart works. For example, early work has looked at the heart’s response to an electric shock. Essentially the heart reboots! That’s why when someone’s heart stops in hospital, the emergency team give it a big electric shock to get it going again. The model predicts in detail what actually happens to the heart when that is done. One of the surprising things is it suggests that how well an electric shock works depends on the particular structure of the person’s heart! That might mean treatment could be more effective if tailored for the person.
Computer modelling is changing the way science is done. It doesn’t replace experiments. Instead, clinical work, modelling and experiments combine to give us a much deeper understanding of the way the world – and that includes our own hearts – works.
The charity Cardiac Risk in the Young raises awareness of cardiac electrical rhythm abnormalities and supports testing (electrocardiograms and echocardiograms) for all young people aged 14-35.
This blog is funded through EPSRC grant EP/W033615/1.
Zin Derfoufi, a Computer Science student at Queen Mary, delves into some of the dark secrets of algorithms past.
Algorithms are used throughout modern life for the benefit of mankind, whether as instructions in special programs to help disabled people, computer instructions in the cars we drive or the specific steps in any calculation. The technologies that they are employed in have helped save lives and also make our world more comfortable to live in. However, beneath all this lies a deep, dark, secret history of algorithms plagued with schemes, lies and deceit.
Algorithms have played a critical role in some of history’s worst and most brutal plots, even causing the rise and fall of nations and monarchs. Ever since humans have been sent on secret missions, plotted to overthrow rulers or tried to keep the secrets of a civilisation unknown, nations and civilisations have been using encrypted messages, and so have used algorithms. Such messages carry sensitive information recorded in such a way that it only makes sense to the sender and recipient, whilst appearing to be gibberish to anyone else. There are a whole variety of encryption methods that can be used, and many people have created new ones for their own use: a risky business unless you are very good at it.
One example is the ‘Caesar Cipher’, named after Julius Caesar, who used it to send secret messages to his generals. Each letter was replaced by the letter three places further down the alphabet, so A became D, B became E, and so on. Of course, the recipient must know the algorithm (how far to move along the alphabet) to regenerate the original letters of the text, otherwise the message would be useless to them. That is why a simple rule like “Move on 3 places in the alphabet” was used: it is easy for a general to remember. More generally, if each letter can stand in for any other letter (a substitution cipher), then for a plain English text there are around 400,000,000,000,000,000,000,000,000 distinct arrangements of the alphabet that could have been used! With that many possibilities it sounds secure. As you can imagine, this would cause any ambitious codebreaker many sleepless nights and even make them go bonkers! Breaking such codes seemed so futile that people began to think such messages were divine!
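The Caesar cipher itself is simple enough to write as a few lines of Python:

```python
# Caesar cipher: shift each letter along the alphabet, wrapping at the end.
def caesar(text, shift=3):
    encrypted = ""
    for ch in text.upper():
        if "A" <= ch <= "Z":
            encrypted += chr((ord(ch) - ord("A") + shift) % 26 + ord("A"))
        else:
            encrypted += ch            # leave spaces and punctuation alone
    return encrypted

print(caesar("ATTACK AT DAWN"))        # -> DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))    # decrypt by shifting back the other way
```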
But then something significant happened. In the 9th century a Muslim Arabic scholar changed the face of cryptography forever. His name was Abu Yusuf Ya’qub ibn Ishaq Al-Kindi, better known to the West as Alkindus. Born in Kufa (Iraq), he went to study in the famous Dar al-Hikmah (House of Wisdom) in Baghdad, the centre of learning of its time. It produced the likes of Al-Khwarizmi, the father of algebra, from whose name the word algorithm originates; the three Banu Musa brothers; and many more scholars who have shaped the fields of engineering, mathematics, physics, medicine, astronomy, philosophy and every other major field of learning in some shape or form.
Al-Kindi introduced the technique of code breaking that was later to be known as ‘frequency analysis’ in his book entitled: ‘A Manuscript on Deciphering Cryptographic Messages’. He said in his book:
“One way to solve an encrypted message, if we know its language, is to find a different plaintext of the same language long enough to fill one sheet or so, and then we count the occurrences of each letter. We call the most frequently occurring letter the ‘first’, the next most occurring one the ‘second’, the following most occurring the ‘third’, and so on, until we account for all the different letters in the plaintext sample.
“Then we look at the cipher text we want to solve and we also classify its symbols. We find the most occurring symbol and change it to the form of the ‘first’ letter of the plaintext sample, the next most common symbol is changed to the form of the ‘second’ letter, and so on, until we account for all symbols of the cryptogram we want to solve”.
So basically, to decrypt a message all we have to do is find out how often each letter occurs, both in a sample of text in the original language and in the encrypted message, and match the two rankings. Obviously common sense and a degree of judgement have to be used where letters have a similar frequency. Although it was a lengthy process, it was certainly the most efficient method of its time and, most importantly, the most effective.
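Al-Kindi’s method is only a few lines of Python today. Here the ciphertext is a Caesar-encrypted English sentence; ranking its letters by frequency is the first step towards matching them against the ranking for ordinary English (E, T, A, O, …).

```python
# Al-Kindi's frequency analysis: count each letter and rank by frequency.
from collections import Counter

def letter_ranking(text):
    counts = Counter(ch for ch in text.upper() if ch.isalpha())
    return [letter for letter, _ in counts.most_common()]

# "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG", Caesar-shifted by 3:
ciphertext = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"
print(letter_ranking(ciphertext)[:5])
# The most common cipher letter probably stands for a common English
# letter such as 'E' or 'T' - the first clue towards cracking the code.
```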
Since decryption became possible, many plots were foiled, changing the course of history. An example of this was how Mary Queen of Scots, a Catholic, plotted with loyal Catholics to overthrow her cousin Queen Elizabeth I, a Protestant, and establish a Catholic country. The details of the plots, carried in encrypted messages, were intercepted and decoded, and on Saturday 15 October 1586 Mary went on trial for treason. Her life depended on whether one of her letters could be decrypted or not: Walsingham, Elizabeth’s spymaster, knew of Al-Kindi’s approach. In the end, she was found guilty and publicly beheaded for high treason.
A more recent example of cryptography, cryptanalysis and espionage was their use throughout World War I to decipher messages intercepted from enemies. The British managed to decipher a message sent by Arthur Zimmermann, the then German Foreign Minister, to the Mexicans, proposing an alliance against America, with Japan to be invited to join, should America enter the war. Once the British showed this to the Americans, President Woodrow Wilson took his nation to war. Just imagine what the world may have been like if America hadn’t joined.
Today encryption is a major part of our lives in the form of Internet security and banking. Learn the art and science of encryption and decryption and who knows, maybe some day you might succeed in devising a new uncrackable cipher or crack an existing banking one! Either way would be a path to riches! So if you thought that algorithms were a bore … it just got a whole lot more interesting.
“The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography” by Simon Singh, especially Chapter One, ‘The Cipher of Mary Queen of Scots’
The world is heading for catastrophe. We’re hooked on power hungry devices: our mobile phones and iPods, our Playstations and laptops. Wherever you turn people are using gadgets, and those gadgets are guzzling energy – energy that we desperately need to save. We are all doomed, doomed…unless of course a hero rides in on a white charger to save us from ourselves.
Don’t worry, the cognitive crash dummies are coming!
Actually the saviours may be people like professor of human-computer interaction Bonnie John, and her then grad student Annie Lu Luo: people who design cognitive crash dummies. Working at Carnegie Mellon University, it was their job to figure out ways of deciding how well gadgets are designed.
If you’re designing a bridge you don’t want to have to build it before finding out if it stays up in an earthquake. If you’re designing a car, you don’t want to find out it isn’t safe by having people die in crashes. Engineers use models – sometimes physical ones, sometimes mathematical ones – that show in advance what will happen. How big an earthquake can the bridge cope with? The mathematical model tells you. How slow must the car go to avoid killing the baby in the back? A crash test dummy will show you.
Even when safety isn’t the issue, engineers want models that can predict how well their designs perform. So what about designers of computer gadgets? Do they have any models to do predictions with? As it happens, they do. Their models are called ‘human behavioural models’, but think of them as ‘cognitive crash dummies’. They are mathematical models of the way people behave, and the idea is you can use them to predict how easy computer interfaces are to use.
There are lots of different kinds of human behavioural model. One such ‘cognitive crash dummy’ is called ‘GOMS’. When designers want to predict which of a few suggested interfaces will be the quickest to use, they can use GOMS to do it.
Suppose you are designing a new phone interface. There are loads of little decisions you’ll have to make that affect how easy the phone is to use. You can fit a certain number of buttons on the phone or touch screen, but what should you make the buttons do? How big should they be? Should you use gestures? You can use menus, but how many levels of menus should a user have to navigate before they actually get to the thing they are trying to do? More to the point, with the different variations you have thought up, how quickly will the person be able to do things like send a text message or reply to a missed call? These are questions GOMS answers.
To do a GOMS prediction you first think up a task you want to know about – sending a text message perhaps. You then write a list of all the steps that are needed to do it. Not just the button presses, but hand movements from one button to another, thinking time, time for the machine to react, and so on. In GOMS, your imaginary user already knows how to do the task, so you don’t have to worry about spending time fiddling around or making mistakes. That means that once you’ve listed all your separate actions GOMS can work out how long the task will take just by adding up the times for all the separate actions. Those basic times have been worked out from lots and lots of experiments on a wide range of devices. They have shown, on average, how long it takes to press a button and how long users are likely to think about it first.
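As a toy illustration, a GOMS-style prediction really is just a lookup-and-add. The action times below are invented placeholders, not the real published figures:

```python
# A toy GOMS-style prediction: list the steps for a task, look up an average
# time for each kind of action, and add them up. Times are made-up values.
ACTION_TIMES = {            # seconds per action (invented placeholders)
    "press_key":   0.3,
    "move_hand":   0.4,
    "think":       1.2,
    "system_wait": 0.5,
}

send_text = ["think", "move_hand", "press_key", "press_key",
             "press_key", "system_wait", "press_key"]

total = sum(ACTION_TIMES[action] for action in send_text)
print(f"Predicted time to send a text: {total:.1f} s")
```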
GOMS in 60 seconds?
GOMS has been around since the 1980s, but wasn’t being used much by industrial designers. The problem is that it is very frustrating and time-consuming to work out all those steps for all the different tasks for a new gadget. Bonnie John’s team developed a tool called CogTool to help. You make a mock-up of your phone design in it, and tell it which buttons to press to do each task. CogTool then works out where the other actions, like hand movements and thinking time, are needed, and makes the predictions.
Bonnie John came up with an easier way to figure out how much human time and effort a new design uses, but what about the device itself? How about predicting which interface design uses less energy? That is where Annie Lu Luo came in. She had the great idea that you could take a GOMS list of actions and, instead of linking actions to times, work out how much energy the device uses for each action instead. By using GOMS together with a tool like CogTool, a designer can find out whether their design is the most energy efficient too.
So it turns out you don’t need a white knight to help your battery usage, just Annie Lu Luo and her version of GOMS. Mobile phone makers saw the benefit of course. That’s why Annie walked straight into a great job on finishing university.