Imagine swallowing a slug (hint: not only a yucky thought, but also not a good idea, as it could kill you)… now imagine swallowing a slug-bot… also yucky, but in the future it might save your life.
When people accidentally swallow solid objects that won’t pass through their digestive system, or that are toxic, it can be a big problem. Once an object passes beyond your stomach, it becomes hard to get at.
That is where the slug-shaped robot comes in (watch the video below). The idea, from scientists at the Chinese University of Hong Kong, is that a slug-like robot could crawl down your throat to retrieve whatever you had swallowed.
If you think of robots as solid, hard things, then a robot would be the last thing you would want to swallow (aside from an actual slug), and certainly not to catch the previous solid thing you swallowed. You may be right. However, that is where the soft, slug-shaped robot comes in.
It is easy to make or buy slime-like “silly” putty. Add iron filings to slime putty and you can make it stretch and sway and even move around with magnets yourself. You can buy such magnetic slime at science museums…it is fun to play with though you definitely shouldn’t swallow it.
The scientists have taken that general idea and, using special materials, created a similar but highly controllable bot that can be moved around using a magnet-based control system. It is made of a material that is both magnetic and slime-like, coated in silicon dioxide to stop it being poisonous.
They have shown that they can control it to squeeze through narrow gaps and encircle small objects, carrying them away with it…essentially what would be needed to recover objects that have been swallowed.
It needs a lot more work to make sure it is safe to really be swallowed. Also to be a real autonomous robot it would need to have sensors included somehow, and be connected to some sort of intelligent system to automatically control its behaviour. However, with more research that all may become possible.
So in the future, if you don’t fancy swallowing a slug-bot, you’d better be far more careful about what else you swallow first. Of course, if it turns out slug-like robots can break down and get stuck themselves, you may then be in the position of needing to swallow a bird-bot to catch the slug-bot. How absurd…
Computer scientist Jason Cordes tells us what it was like to work for NASA on the International Space Station during the time of Space Shuttle launches. (From the archive)
Working for a space agency is brilliant. When I was younger, I often looked up at the stars and wondered what was out there. I visited Johnson Space Center in Houston, Texas and told myself that I wanted to work there someday. After completing my college degree in computer science, I had the great fortune to be asked to work at NASA’s Johnson Space Center as well as Kennedy Space Center.
Johnson Space Center is the home of the Mission Control Center (MCC). This is where NASA engineers direct in-orbit flights and track the position of the International Space Station (ISS) and the Space Shuttle when it is in orbit. Kennedy Space Center, situated at Cape Canaveral, Florida, is where the Space Shuttle and most other space-bound vehicles are launched. Once they achieve orbit, control is handed over to Johnson Space Center in Houston, which is why when you hear astronauts calling Earth, they talk to “Houston”.
Space City
Houston is a very busy city and you get that feeling when you are at Johnson. There are people everywhere and the Space Center looks like a small city unto itself. While I was there I worked on the computer control system for the International Space Station. The part I worked on was a series of laptop-based displays designed to give astronauts on the station a real-time view of the state of everything, from oxygen levels to the location of the robotic arm.
The interesting thing about developing this type of software is realising that the program is basically sending and receiving telemetry (essentially a long list of numbers) to and from the hardware, where the hardware is the space station itself. Once you think of it like that, the sheer simplicity of what is being done is really surprising. I certainly expected something more complex. All of the telemetry comes in over a wire, and the software has to keep track of which telemetry belongs to which component, since different components all broadcast over the same wire. Essentially, the program routes the data based on the component it comes from and acts as an interpreter, taking the numbers that the space station is feeding it and converting them into a graphical format that the astronauts can understand. The coolest part of working in Houston was interacting with astronauts and getting their feedback on how the software should work. It’s like working with celebrities.
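To get a feel for what that routing-and-interpreting job involves, here is a minimal sketch. It is emphatically not NASA’s actual code: the component IDs, packet format and display text are all invented for illustration.

```python
# Toy sketch of telemetry routing: packets from many components arrive on
# one shared "wire"; the software routes each value to the right handler,
# which turns the raw number into something an astronaut can read.

# Hypothetical component IDs and display formats, purely for illustration.
HANDLERS = {
    0x01: lambda v: f"Oxygen level: {v:.1f}%",
    0x02: lambda v: f"Robot arm angle: {v:.1f} degrees",
}

def route(packet):
    """Interpret one (component_id, raw_value) telemetry packet."""
    component_id, raw_value = packet
    handler = HANDLERS.get(component_id)
    if handler is None:
        return f"Unknown component {component_id:#x}: {raw_value}"
    return handler(raw_value)

# All components broadcast on the same stream; the router sorts them out.
stream = [(0x01, 20.9), (0x02, 45.0), (0x03, 7.0)]
displays = [route(p) for p in stream]
```

The heart of it really is that simple: a lookup from component to interpreter, applied packet by packet.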
Wild times
While at Kennedy Space Center, I was tasked with working on the Shuttle Launch Control System for the next generation of shuttles. The software is very similar to that used to control the ISS. The thing I remember most about working there was the environment.
Kennedy Space Center is about as opposite as you can get from the big-city feeling at Johnson. It’s situated on what is essentially swampland on the eastern coast of Florida. The main gates to Johnson are right on major streets within Houston, but at Kennedy the gate is on a major highway, and even then, travel to the actual buildings of the Space Center is a leisurely 30-minute drive through orange groves and trees, past causeways and creeks. Along the way you might spot an eagle’s nest in one of the trees, or a manatee poking its head from the water. Kennedy is in the middle of a wildlife preserve with alligators, manatees, raccoons and every other kind of critter you can imagine. In fact, I was prevented from going home one evening by a gator that decided to warm itself up by my car.
The coolest thing about working at NASA, and specifically Kennedy Space Center, was being able to watch shuttle launches from less than 10 miles away. It’s an incredible experience. The thundering engines vibrate throughout your body, like standing next to the speakers at a far-too-loud rock concert. Night launches were the most amazing, with the fire from the engines lighting up the sky. It is astonishing to watch this machine and realise that you are the one who wrote the computer program that set it in motion. I’ve worked at a few development firms, but few gave me the same feeling of seeing my work in action as this did.
A replica of Beagle 2 in the Science Museum with solar panels deployed. Image by user:geni from Wikimedia CC BY-SA 4.0
A reason the Apollo Moon landings were manned was, in part, that the astronauts were there to deal with things if they went wrong: landing on a planet or moon’s surface is perfectly possible to do automatically, as long as things go to plan. It is when something unexpected happens that things get tricky.
Beagle 2 is a good example. It was a British-built space probe sent to explore Mars in 2003. Named after HMS Beagle, the ship that carried naturalist Charles Darwin, it sadly never made it. It was due to land on Christmas Day that year, but something went wrong and it vanished without a trace. Beagle 2’s disappearance was perhaps the inspiration behind the Guinevere One space probe in the 2005 Doctor Who episode ‘The Christmas Invasion’, though Beagle 2 was unlikely to have been stolen by the Sycorax.
Had Beagle 2 made it, the first thing we would have heard was its radio call sign: some digital music specially composed by Britpop group Blur. It wasn’t the only part of the ill-fated Beagle 2 mission with an artistic twist. Famous British artist Damien Hirst (the man who had previously pickled halved calves in formaldehyde tanks) had designed one of his famous spot paintings – rows of differently coloured spots – to be used as an instrument calibration chart. It would have been the first art on Mars, but instead it appeared to have become the first art all over Mars! However, if you shoot for the stars you have to expect things to fail sometimes. You learn and try again.
There was a twist to the story too: eleven years later, in 2015, Beagle 2 was spotted by NASA’s Mars Reconnaissance Orbiter. Using sophisticated image reconstruction programs working with a series of different images, a picture of it was created that allowed scientists to work out some of what had happened. It had landed successfully on Mars, but apparently its solar panels had then failed to fully open. One appeared to be blocking its communications antenna, meaning it had no way to talk to Earth, and no way to repair itself either. It may well have collected data, but just couldn’t tell us about it (or play us some Blur). The data it collected (if it did) may still be there, waiting for the day when it can be passed back to Earth.
While it may not have succeeded in helping us find out more about Mars, Beagle 2 has presumably become the first Martian art gallery, displaying the one and only work of art on the planet: a spot painting by Damien Hirst.
Peter W McOwan and Paul Curzon, Queen Mary University of London
The Apollo lunar modules that landed on the Moon were guided by a complex mixture of computer program control and human control. Neil Armstrong and the other astronauts essentially operated a semi-automatic autopilot, switching pre-programmed routines on and off. One of the many problems the astronauts had to deal with was that the engines had to be shut down before the craft actually landed. Too soon and they would land too heavily with a crunch; too late and they could kick up the surface, and the dust might cause the lunar module to explode. But how to know when?
They had ground-sensing radar, but would it be accurate enough? They needed to know when they were only feet above the surface. The solution was a cunning contraption: essentially a sensor button on the end of a long stick. These sensors dangled below each foot of the lunar module (see image). When one touched the surface the button pressed in, a light came on in the control panel, and the astronaut knew to switch the engines off. Essentially, this sensor is the same as an épée: a fencing sword. In a fencing match the sword registers a hit when the button on its tip is pressed against the opponent’s body. Via a wire running down the sword and out behind the fencer, that switches on a light on the scoreboard, telling the referee who made the hit. So the Lunar Module effectively had a fencing bout with the Moon… and won.
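The contact-light logic is about as simple as a sensor system gets, which is part of its beauty. Here is a minimal sketch; the probe names and messages are invented for illustration:

```python
# Toy sketch of the lunar contact light: each dangling probe is just a
# switch. The moment any of them closes, the lamp comes on and the crew
# knows to cut the engines.

def contact_light(probes):
    """True as soon as any landing probe reports surface contact."""
    return any(probes.values())

# Hypothetical readings from probes dangling under the lander's feet.
probes = {'probe_1': False, 'probe_2': True, 'probe_3': False}

if contact_light(probes):
    status = "CONTACT LIGHT - engines off"
else:
    status = "still descending"
```

A single OR over a handful of switches: no radar maths, no guesswork, just a button being pressed.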
Charles Babbage invented wonderful computing machines. But he was not very good at explaining things. That’s where Ada Lovelace came in. She is famous for writing a paper in 1843 explaining how Charles Babbage’s Analytical Engine worked – including a big table of formulas which is often described as “the first computer program”.
Charles Babbage invented his mechanical computers to save everyone from the hard work of doing big mathematical calculations by hand. He only managed to build a few tiny working models of his first machine, his Difference Engine. It was finally built to Babbage’s designs in the 1990s and you can see it in the London Science Museum. It has 8,000 mechanical parts and is the size of a small car, but when the operator turns the big handle on the side it works perfectly, and prints out correct answers.
Babbage invented, but never built, a more ambitious machine, his Analytical Engine. In modern language, this was a general purpose computer, so it could have calculated anything a modern computer can – just a lot more slowly. It was entirely mechanical, but it had all the elements we recognize today – like memory, CPU, and loops.
Lovelace’s paper explains all the geeky details of how numbers are moved from memory to the CPU and back, and the way the machine would be programmed using punched cards.
But she doesn’t stop there – in quaint Victorian language she tells us about the challenges familiar to every programmer today! She understands how complicated programming is:
“There are frequently several distinct sets of effects going on simultaneously; all in a manner independent of each other, and yet to a greater or less degree exercising a mutual influence.”
the difficulty of getting things right:
“To adjust each to every other, and indeed even to perceive and trace them out with perfect correctness and success, entails difficulties whose nature partakes to a certain extent of those involved in every question where conditions are very numerous and inter-complicated.”
and the challenge of making things go faster:
“One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.”
She explains how computing is about patterns:
“it weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”.
and inventing new ideas:
“We might even invent laws … in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining”.
and being creative. If we knew the laws for composing music:
“the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
Alan Turing famously asked if a machine can think – Ada Lovelace got there first:
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
Wow, pretty amazing, for someone born 200 years ago.
Ursula Martin, University of Oxford (From the archive)
Charles Dickens is famous for his novels highlighting Victorian social injustice. Despite what people say, art and science really do mix, and Dickens certainly knew some computer science. In his classic novel about the French Revolution, A Tale of Two Cities, one of his characters relies on some computer science based knitting.
Dickens actually moved in the same social circles as Charles Babbage, the Victorian inventor of the first computer (which he designed but unfortunately never managed to build), and Ada Lovelace, the mathematician who worked with him on those first computers. They went to the same dinner parties, and Dickens would have seen Babbage demonstrate his prototype machines. An engineer in Dickens’ novel Little Dorrit is even believed to be partly based on Babbage. Dickens was probably the last non-family member to visit Ada before she died. She asked him to read to her, choosing a passage from his book Dombey and Son in which the son, Paul Dombey, dies. Like Ada, Paul Dombey had suffered from illness all his life.
So Charles Dickens had lots of opportunity to learn about algorithms! His novel A Tale of Two Cities is all about the French Revolution, but lurking in the shadows is some computer science. One of the characters, a revolutionary called Madame Defarge, takes responsibility for keeping a register of all the people who are to be executed once the revolution comes to pass: the aristocrats and “enemies of the people”. Of course, in the actual French Revolution lots of aristocrats were guillotined precisely for being enemies of the new state.
Now, Madame Defarge could have just tried to memorize the names on her ‘register’, as she supposedly has a great memory, but the revolutionaries wanted a physical record. That raises the problem, though, of how to keep it secret, and that is where the computer science comes in. Madame Defarge knits all the time, and so she decides to store the names in her knitting.
“Knitted, in her own stitches and her own symbols, it will always be as plain to her as the sun. Confide in Madame Defarge. It would be easier for the weakest poltroon that lives, to erase himself from existence, than to erase one letter of his name or crimes from the knitted register of Madame Defarge.”
Computer scientists call this Steganography: hiding information or messages in plain sight, so that no one suspects they are there at all. Modern forms of steganography include hiding messages in the digital representation of pictures and in the silences of a Skype conversation.
Madame Defarge didn’t, of course, just knit French words into the pattern like a Victorian scarf version of a T-shirt message. It wouldn’t have been very secret if anyone looking at the resulting scarf could read the names. So how to do it? In fact, knitting has been used as a form of steganography for real. One way was for a person to take a ball of wool and mark a message down it in Morse code dots and dashes. The wool was then knitted into a jumper or scarf. The message is hidden! To read it you unpick it all and read the Morse code back off the wool.
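The Morse-in-the-wool trick can be sketched in a few lines of code. This is just an illustration using ordinary Morse code (the name encoded is one from the novel, not a historical spy message):

```python
# Sketch of the wool trick: mark a message along the yarn as Morse dots
# and dashes, then knit it up. Decoding means unpicking the knitting and
# reading the marks back off the wool.
MORSE = {
    'A': '.-', 'B': '-...', 'C': '-.-.', 'D': '-..', 'E': '.',
    'F': '..-.', 'G': '--.', 'H': '....', 'I': '..', 'J': '.---',
    'K': '-.-', 'L': '.-..', 'M': '--', 'N': '-.', 'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.', 'S': '...', 'T': '-',
    'U': '..-', 'V': '...-', 'W': '.--', 'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def mark_wool(name):
    """Turn a name into the dot/dash marks to make along the wool."""
    return ' '.join(MORSE[c] for c in name.upper() if c in MORSE)

def read_wool(marks):
    """Unpick: turn the marks back into letters."""
    reverse = {code: letter for letter, code in MORSE.items()}
    return ''.join(reverse[m] for m in marks.split())

marks = mark_wool("Evremonde")  # a name from A Tale of Two Cities
```

Anyone can see the scarf; only someone who knows to unpick it and read the yarn finds the message.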
That wouldn’t have worked for Madame Defarge, though. She wanted to add the names to the register in plain view of the person concerned, as they watched, and without them knowing what she was doing. She therefore needed the knitting patterns themselves to hold the code. It was possible because she was both a fast knitter and sat knitting constantly, so it raised no suspicion. The names were therefore, as Dickens writes, “Knitted, in her own stitches and her own symbols”.
She used a ‘cipher’, and that brings in another area of computer science: encryption. A cipher is just an algorithm – a set of rules to follow – that converts symbols in one alphabet (letters) into different symbols. In Madame Defarge’s case the new symbols were not written but knitted: sequences of stitches. Only if you know the algorithm, and the secret ‘key’ that was used in the encryption, can you convert the knitted sequences back into the original message.
In fact, both steganography and encryption date back thousands of years (computer science predates computers!), though Charles Dickens may have been the first to use knitting to do it in a novel. The Ancient Greeks used steganography: in the most famous case a message was written on a slave’s shaved head, and they then let the hair grow back. The Romans knew about cryptographic algorithms too, and one of the most famous ciphers is called the Caesar cipher because Julius Caesar used it when writing letters… even in Roman times people were worried about spies reading their equivalent of emails.
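The Caesar cipher is simple enough to try yourself: shift each letter a fixed number of places along the alphabet, where the shift is the secret key. Here is a minimal sketch; the message and the key of 3 are just examples:

```python
# Caesar cipher: encrypt by shifting each letter 'key' places along the
# alphabet, wrapping around from Z back to A. Decrypt by shifting back.
import string

def caesar(text, key):
    """Encrypt with a shift of 'key'; decrypt with caesar(ciphertext, -key)."""
    result = []
    for ch in text.upper():
        if ch in string.ascii_uppercase:
            shifted = (ord(ch) - ord('A') + key) % 26
            result.append(chr(ord('A') + shifted))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return ''.join(result)

secret = caesar("ATTACK AT DAWN", 3)  # -> "DWWDFN DW GDZQ"
plain = caesar(secret, -3)            # -> "ATTACK AT DAWN"
```

With only 25 possible keys it is trivially breakable today, but the idea of an algorithm plus a secret key is exactly the one Madame Defarge needed for her stitches.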
Dickens didn’t actually describe the code that Madame Defarge was using, so we can only guess… but why not see that as an opportunity and, if you can knit, invent a way yourself? If you can’t knit, then learn to knit first and then invent one! Somehow you need a series of stitches to represent each letter of the alphabet. In doing so you are doing algorithmic thinking with knitting. You are knitting your way to being a computer scientist.
Paul Curzon, Queen Mary University of London (From the archive)
Avengers: Age of Ultron is the latest film about robots or artificial intelligences (AIs) trying to take over the world. AI is becoming ever-present in our lives, at least in the form of software tools that demonstrate elements of human-like intelligence. The AIs in our mobile phones apply and adapt their rules to learn to serve us better, for example. But fears of AI’s potential negative impact on humanity remain, as seen in its projection into characters like Ultron, a super-intelligence accidentally created by the Avengers.
But what relation do the evil AIs of the movies have to scientific reality? Could an AI take over the world? How would it do it? And why would it want to? AI movie villains need to consider the whodunit staples of motive and opportunity.
Motive? What motive?
Let’s look at the motive. Few would say intelligence in itself unswervingly leads to a desire to rule the world. In movies, AIs are often driven by self-preservation: a realisation that fearful humans might shut them down. But would we give our AI tools cause to feel threatened? They provide benefits for us, and there seems little reason to create a sense of self-awareness in a system that searches the web for the nearest Italian restaurant, for example.
Another popular motive for AIs’ evilness is their zealous application of logic. In Ultron’s case, the goal of protecting the Earth can only be accomplished, he concludes, by wiping out humanity. This destruction-by-logic is reminiscent of the notion that a computer would select a stopped clock over one that is two seconds slow: the stopped clock is right twice a day, whereas the slow one is never right. Ultron’s plot motivation, based on brittle logic combined with indifference to life, seems at odds with today’s AI systems, which reason mathematically with uncertainty and are built to work safely with users.
Opportunity Knocks
When we consider an AI’s opportunity to rule the world, we are on somewhat firmer ground. The famous Turing Test of machine intelligence was set up to measure a particular skill: the ability to conduct a believable conversation. The premise is that if you can’t tell the difference between AI and human skill, the AI has passed the test and should be considered as intelligent as a human.
So what would a Turing Test for the ‘skill’ of world domination look like? To explore that we need to compare antisocial AI behaviours with the attributes expected of a human world dominator. World dominators need to control important parts of our lives, say our access to money or our ability to buy a house. AI does that already: lending decisions are frequently made by an AI sifting through mountains of information to decide your creditworthiness. AIs now trade on the stock market too.
An overlord would give orders and expect them to be followed. Anyone who has stood helplessly at a shop’s self-service till as it makes repeated bagging related demands of them already knows what it feels like to be bossed about by AIs.
Kill Bill?
Finally, no megalomaniac Hollywood robot would be complete without at least some desire to kill us. Today, military robots can identify targets without human intervention. Currently a human controller gives permission to attack, but it’s not a stretch to say that the potential to auto-kill exists in these AIs; we would just need to change the computer code to allow it.
These examples arguably show AI in control of limited but significant parts of life on Earth, but to truly dominate the world, movie style, these individual AIs would need to start working together to create a synchronised AI army: that bossy self-service till talking to your health monitor and refusing to sell you beer, then both ganging up with a credit-scoring system to raise your credit limit only if you buy a pair of trainers with a built-in GPS tracker and only eat the kale from your smart fridge, and then only after the shoe data shows you completed the required five-mile run.
It’s a worrying picture, but fortunately I think it’s an unlikely one. Engineers worldwide are developing the Internet of Things: networks connecting all manner of devices together to create new services. These are pieces of a jigsaw that would need to join together and form a big picture for total world domination. It’s an unlikely situation: too much has to fall into place and work together. It’s a lot like the infamous plot hole in Independence Day, where an Apple Mac and an alien spaceship’s software inexplicably have cross-platform compatibility. [See video below for a possible answer!]
Our earthly AI systems are written in a range of computer languages, hold different data in different ways, and use different, incompatible rule sets and learning techniques. Unless we design them to be compatible, there is no reason why two safely designed AI systems, developed by separate companies for separate services, would spontaneously blend to share capabilities and form some greater common goal without human intervention.
So could AIs, and the robot bodies containing them, pass the test and take over the world? Only if we humans let them, and help them a lot. Why would we?
The beautiful (and quite possibly wi-fi ready, with those antennas) Victoria Crowned Pigeon. Not a carrier pigeon admittedly, but much more photogenic. Image by Foto-Rabe from Pixabay
Happy April Fool’s Day everyone, here are a couple of examples of programmers having a little fun.
Winged messengers
In 1990 a joke memo was published for April Fool’s Day suggesting the use of homing pigeons as a form of internet, in which the birds would carry small packets of data. The memo, called ‘IP over Avian Carriers’ (that is, a bird-based internet), was written in a mock-serious tone (you can read it here), but although it was written for fun the idea has actually been used in real life too. Photographers in remote areas with minimal internet signal have used homing pigeons to send their pictures back.
A company in the US offering adventure holidays, including rafting, used homing pigeons to return rolls of film (before digital photography took over) to the company’s base. The guides and their guests would take loads of photos while having fun rafting on the river, and the birds would speed the photos back to the base, where they could be developed, so that when the adventurous guests arrived later their photos were ready for them.
You might also enjoy this attempt to make broadband work over wet string instead of the more usual wires. They actually managed it! Broadband over ‘wet string’ tested for fun (13 December 2017)
Serious fun with pigeons
On April Fool’s Day in 2002, Google ‘admitted’ to its users that the reason their web search results appeared so quickly and were so accurate was that, rather than using automated processes to grab the best result, Google was actually using a bank of pigeons to select the best results: millions of pigeons viewing web pages and pecking to pick the best one for you when you type in your search question. Pretty unlikely, right?
In a rather surprising non-April Fool twist, some researchers decided to test how well pigeons can distinguish different types of information in medical photographs. They trained the pigeons by getting them to view pictures of tissue samples taken from healthy people, as well as pictures taken from people who were ill. The pigeons had to peck one of two coloured buttons, and in doing so learned which pictures were of healthy tissue and which were diseased. If they pecked the correct button they got a food reward.
The researchers then tested the pigeons with a fresh set of pictures, to see if they could apply their learning to pictures they’d not seen before. Incredibly, the pigeons were pretty good at separating the pictures into healthy and unhealthy, with an 80 per cent hit rate.
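That train-then-test setup is essentially how machine-learning classifiers are evaluated too. Here is a toy sketch of the idea, assuming (purely for illustration) that each tissue image is boiled down to a single made-up number, say how dark the stained tissue looks:

```python
# Toy train-then-test classifier. Learn the average "darkness" for each
# label from training examples, then classify fresh examples it has never
# seen by picking the closest learned average, just as the pigeons were
# tested on pictures they had not been trained on.

def train(examples):
    """Learn the average value for each label from (value, label) pairs."""
    totals, counts = {}, {}
    for value, label in examples:
        totals[label] = totals.get(label, 0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: totals[label] / counts[label] for label in totals}

def classify(value, centroids):
    """Pick the label whose learned average is closest to this value."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

# Made-up training data: low numbers = healthy-looking, high = diseased.
training = [(0.2, 'healthy'), (0.3, 'healthy'), (0.8, 'diseased'), (0.9, 'diseased')]
centroids = train(training)

# Fresh examples the classifier has never seen.
fresh = [(0.25, 'healthy'), (0.85, 'diseased'), (0.1, 'healthy')]
hits = sum(classify(v, centroids) == label for v, label in fresh)
accuracy = hits / len(fresh)
```

Real images need far more than one number to describe them, of course, but the generalisation test is the same: does the learned rule still work on data it was never trained on?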
Theatre producers, radio directors and film-makers have been trying to create realistic versions of natural sounds for years. Special effects teams break frozen celery stalks to mimic breaking bones, smack coconut shells on hard-packed sand to hear horses gallop, and rustle cellophane for crackling fire. Famously, in the first Star Wars movie the Wookiee sounds are each made up of up to six animal clips combined, including a walrus! Sometimes the special effects people even record the real thing and play it at the right time! (Not a good idea for the breaking bones, though!) The person using props to create sounds for radio and film is called a Foley artist, named after the work of Jack Donovan Foley in the 1920s. Now the Foley artist is drawing on digital technology to get the job done.
Designing sounds
Sound designers have a hard job finding the right sounds. So how about creating sound automatically using algorithms? Synthetic sound! Research into sound creation is a hot topic, not just for special effects but also to help understand how people hear and for use in many other sound based systems. We can create simple sounds fairly easily using musical instruments and synthesisers, but creating sounds from nature, animal sounds and speech is much more complicated.
The approaches used to recognize sounds can be the basis of generating sounds too. You can either try to hand-craft a set of rules that describe what makes the sound sound the way it does, or you can write algorithms that work it out for themselves.
Paying attention to patterns
One method, developed as a way to automatically generate synthetic sound, is based on looking for patterns in the sounds. Computer scientists often create mathematical models to better understand things, as well as to recognize and generate computer versions of them. The idea is to look at (or here, listen to) lots of examples of the thing being studied. As patterns become obvious, the scientists also start to identify elements that don’t have much impact. Those features are ignored so the focus stays on the most important parts. In doing this they build up a general model, or view, that describes all possible examples. This skill of ignoring unimportant detail is called abstraction, and creating a general view, a model of something, is called generalisation: both important parts of computational thinking. The result is a hand-crafted model for generating that sound.
That’s pretty difficult to do, though, so instead computer scientists write algorithms to do it for them. Now, rather than a person trying to work out what is, or is not, important, training algorithms work it out using statistical rules. The more data they see, the stronger the pattern that emerges, which is why these approaches are often referred to as ‘Big Data’: they rely on number-crunching vast data sets. The learnt pattern is then matched against new data, looking for examples, or used as the basis for creating new examples that match the pattern.
The rain in train(ing)
Number crunching based on Big Data isn’t the only way, though; sometimes general patterns can be identified from knowledge of the thing being investigated. For example, rain isn’t one sound but is made up of lots of raindrops all doing a similar thing. Natural sounds often have that kind of property, so knowledge of a phenomenon can be used to create a basic model to build a generator around. This is an approach Richard Turner, now at Cambridge University, has pioneered, analysing the statistical properties of natural sounds. By creating a basic model and then gradually tweaking it to match the sound quality of lots of different natural sounds, his algorithms can learn what natural sounds are like in general. Then, given a specific natural ‘training’ sound, they can generate synthetic versions of that sound by choosing settings that match its features. You could give it a recorded sample of real rain, for example. His sound-processing algorithms then apply a bunch of maths to pull out the important features of that particular sound, based on the statistical models. With the critical features identified and plugged into his general model, a new sound of any length can be generated that still matches the statistical pattern of, and so sounds like, the original. Using the model you can create lots of different versions of rain that all still sound like rain, lots of different campfires, lots of different streams, and so on.
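To get a feel for the “rain is many raindrops” idea, here is a toy sketch. It is not Richard Turner’s actual algorithm, and all the numbers (drop rate, decay, sample rate) are made up; in a real system those settings would be fitted to a recording:

```python
# Toy rain synthesiser: each raindrop is a short burst of noise that dies
# away; summing many drops at random start times gives a new rain-like
# sound of any length that shares the overall pattern of the real thing.
import random

SAMPLE_RATE = 8000  # samples per second (made-up, low-fi)

def raindrop(length=200, decay=0.97):
    """One drop: a short burst of noise with a fading amplitude."""
    amp, samples = 1.0, []
    for _ in range(length):
        samples.append(amp * random.uniform(-1, 1))
        amp *= decay
    return samples

def rain(seconds=1.0, drops_per_second=300):
    """Sum many drops at random start times into one longer sound."""
    n = int(seconds * SAMPLE_RATE)
    sound = [0.0] * n
    for _ in range(int(seconds * drops_per_second)):
        start = random.randrange(n)
        for i, s in enumerate(raindrop()):
            if start + i < n:
                sound[start + i] += 0.1 * s
    return sound

sound = rain(seconds=0.5)
```

Run it twice and you get two different sounds, yet both follow the same statistical pattern, which is exactly why every synthetic shower can still sound like rain.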
For now, the celery stalks are still in use, as are the walrus clips, but it may not be long before film studios completely replace their Foley bag of tricks with computerised solutions like Richard’s. One Wookiee for 3 minutes and a dawn chorus for 5, please.
Become a Foley Artist with Sonic Pi
You can have a go at being a Foley artist yourself. Sonic Pi is a free live-coding synth for music creation that is powerful enough for professional musicians but also intended to get beginners into live coding: combining programming with composing to make live music.
It was designed for use with a Raspberry Pi computer, which is a cheap way to get started, though it works with other computers too. It’s also a great, fun way to start learning to program.
Play with anything, and everything, you find around the house, junk or otherwise. See what sounds it makes. Record it, and then see what it makes you think of out of context. Build up your own library of sounds, labelling them with things they sound like. Take clips of films, mute the sound and create your own soundscape for them. Store the sound clips and then manipulate them in Sonic Pi, and see if you can use them as the basis of different sounds.
Listen to the example sound clips made with Sonic Pi on their website, then start adapting them to create your own sounds, your own music. What is the most ‘natural sound’ you can find or create using Sonic Pi?
Jane Waite and Paul Curzon, Queen Mary University of London.
This issue explores the work of scientists and engineers who are using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn can help recognise birds even when they can’t be seen, helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer-generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.
Subscribe to be notified whenever we publish a new post to the CS4FN blog.
This blog is funded by EPSRC on research agreement EP/W033615/1.
Understanding protein folding to tackle diseases, and how computers (and people) can help
HIV-1 protease – an illustration showing the folded shape of a protein used by HIV, created by ‘Boghog’ in 2008, via Wikimedia Commons. Public domain.
Biologists want you to play games in the name of science. A group of researchers at the University of Washington have invented a computer game, Foldit, in which you have to pack what looks like a 3D nest of noodles and elastics into the smallest possible space. You drag, turn and squeeze the noodles until they’re packed in tight. You compete against others, and as you get better you can rise through the ranks of competitors around the world. How can that help science? It’s because the big 3D jumbles represent models of proteins, and figuring out how proteins fold themselves up is one of the biggest problems in biology. Knowing more about how they do it could help researchers design cures for some of the world’s deadliest diseases.
The perfect fit
Proteins are in every cell in your body. They help you digest your food, send signals through your brain, and fight infection. They’re made of small molecules called amino acids. It’s easy for scientists to figure out what amino acids go together to make up a protein, but it’s incredibly difficult to figure out the shape they make when they do it. That’s a shame, because the shape of a protein is what makes it able to do its job. Proteins act by binding on to other molecules – for example, a protein called haemoglobin carries oxygen around our blood. The shape of the haemoglobin molecule has to fit the shape of the oxygen molecule like a lock and key. The close tie between form and function means that if you could figure out the shape that a particular protein folds into, you would know a lot about the jobs it can do.
Completely complex
Tantrix rotation puzzle. Image by CS4FN.
Protein folding is part of a group of problems that are an old nemesis of computer scientists. It’s what’s known as an NP-complete problem: a mathematical term meaning there appears to be no shortcut to calculating the answer. You just have to try every different possible answer before you arrive at the right one. There are other problems like this, like the Tantrix rotation puzzle. Because a computer would have to check through every possible answer, the more complex the problem is, the longer it will take. Protein folding is particularly complex – an average-sized protein contains about 100 amino acids, which means checking every possibility would take a computer a billion billion billion years. So a shortcut would be nice, then.
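You can check that “billion billion billion years” estimate with a few lines of Ruby. The figures here are illustrative assumptions (three possible shapes per amino acid, and a computer that checks a trillion shapes per second), not numbers from the article, but they give the right flavour of how fast the search space explodes.

```ruby
# Back-of-envelope size of the brute-force protein folding search.
# The "3 shapes per amino acid" and "trillion checks per second"
# figures are assumptions for illustration only.
amino_acids    = 100
conformations  = 3**amino_acids      # roughly 5 x 10**47 possible shapes
checks_per_sec = 10**12              # one shape checked every picosecond
seconds_needed = conformations / checks_per_sec
years_needed   = seconds_needed / (60 * 60 * 24 * 365)
# years_needed comes out around 10**28 -- billions of billions of billions
```

Even with wildly generous assumptions about the computer’s speed, the answer stays astronomically large, which is exactly why brute force is hopeless and shortcuts matter.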
Puzzling out a cure
Obviously the proteins themselves have found a shortcut. They fold up all the time without having to have computers figure it out for them. In order to get to the bottom of how they do it, though, scientists are hoping that human beings might provide a shortcut. Humans love puzzles, and we’re awfully good at visual ones. Our good visual sense means we see patterns everywhere, and we can easily develop a ‘feel’ for how to use those patterns to solve problems. We use that sense when we play games like chess or Go. The scientists behind Foldit reckon that if it turns out that humans really are more efficient at solving protein folding problems, we can teach some of our tricks to computers.
If there were an efficient way to work out protein structure, it could be a huge boon to medicine. Diseases depend on proteins too, and lots of drugs work by targeting the business end of those proteins. HIV uses two proteins to infect people and replicate itself, so anti-HIV drugs work by disrupting those proteins. Cancer, on the other hand, damages helpful proteins. If scientists understood how proteins fold, they could design new proteins to counteract the effects of disease. So getting to the top of the tables in Foldit could hold even more glory for you than you bargained for – if your protein folding efforts help cure a dreaded disease, then maybe it’s the Nobel Prize you’ll end up winning.
The coloured diagram of the enzyme above is a 3D representation to help people see how the protein folds. Diagrams like this are called ribbon diagrams; they were invented by Jane S. Richardson.