Executable Biology

Computing cancer using computational modelling

(From the archive)

Can a robot get cancer? Silly question. Our bodies are made of cells. Robots aren’t. Cells are the basic building blocks of life and come in lots of different forms from long thin nerve cells that allow us to sense the world, to round blood cells that carry oxygen around our bodies. Cancer occurs when cells go rogue and start reproducing in an uncontrolled way. A computer can’t get cancer, but you can allow virtual diseases to attack virtual cells inside a computer. Doing that may just help find cures. That is what Jasmin Fisher, who leads a research group at Microsoft Research in Cambridge, has devoted her career to.

Becoming a medic isn’t the only way to help save lives!

Computational Modelling is changing the way the sciences are done. It is the idea that you can run experiments on virtual versions of things you are investigating. A computer model is essentially just a program that simulates the phenomena of interest. For example, by writing a program that simulates the laws of Physics, you can use it to run virtual Physics experiments about the motion of the planets, say. If your virtual planets do follow the paths real planets do, then you have evidence the laws are right. If they don't, your laws (or the models) need to change. You can also make predictions such as when an eclipse will happen. If you are right, it suggests the laws you coded are good descriptions of reality. If wrong, back to the drawing board.
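To make that concrete, here is a minimal sketch in Python of exactly this kind of virtual experiment (the numbers are toy values, not real astronomical data): a single planet stepped around its star using Newton's law of gravitation. Change the law in the code and the virtual planet follows a different path.

```python
import math

# A minimal virtual-physics sketch: one planet orbiting a star,
# stepped forward using Newton's law of gravitation.
# Illustrative toy units, not real astronomical data.
G = 1.0            # gravitational constant (toy units)
M = 1000.0         # mass of the star
x, y = 100.0, 0.0  # planet's starting position
vx, vy = 0.0, math.sqrt(G * M / 100.0)  # speed for a roughly circular orbit
dt = 0.01          # size of each simulated time step

for step in range(100_000):
    r = math.hypot(x, y)                           # distance from the star
    ax, ay = -G * M * x / r**3, -G * M * y / r**3  # acceleration towards it
    vx, vy = vx + ax * dt, vy + ay * dt            # update velocity...
    x, y = x + vx * dt, y + vy * dt                # ...then position
    if step % 10_000 == 0:
        print(f"t={step * dt:7.1f}  x={x:8.2f}  y={y:8.2f}")
```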

Jasmin has been pioneering this idea with the stuff of life and death. She focusses on modelling cells and the specific ways that we think cancer attacks them. It gives a way of exploring what is going on at the level of the molecules inside cells, and so how well new medicines might, or might not, work. Experiments can be done quickly and easily on the programmed models by running simulations. That means the real experiments, taking up expensive lab time, can focus on things that are most likely to be successful. Jasmin’s work has helped researchers design more effective actual experiments because they start with a better understanding of what is going on. One of the most important questions she is studying is how cells end up becoming what they are, and how this differs between normal cells and cancer cells. Understand this and we will be much closer to understanding how to stop cancer.

Paul Curzon, Queen Mary University of London



Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

CS4FN Advent Calendar – bonus material: HexaFestiveFlexagons to make and colour in

There’s a gift on side one, and another if you turn the hexahexaflexagon over. Another 4 to find inside the flexagon. Image by Jo Brodie

(Updated for 2023, now with US Letter size files in addition to A4 files)

Father Christmas has lost six of his presents inside this flexagon. The first two are easy to find but can you uncover the other four?

By folding and unfolding the flexagon, perhaps you can find them for him.

Print and make your own hexahexaflexagon and help Father Christmas find the missing gifts.

  1. Print (or draw) your flexagon
  2. Cut it out, then fold it following the instructions
  3. Find all of Father Christmas’ lost gifts by pinching and folding the flexagon to reveal the hidden faces

A hexahexaflexagon is a six-sided shape (hexagon) which also has six faces in total (hexa-hexa) and which can flex and fold to show a new face (flexagon). They are fun to make and play with but can also be used to learn some computational thinking. To get to each face or side you may need to follow a variety of paths; you can't always get to every face from every other face. The sides you can reach depend on the sides you currently have visible – it's a 'finite state machine' and you can create a map to describe how you navigate around your hexahexaflexagon. See our page on Computational Thinking: HexaHexaFlexagon Automata and download our free booklet (PDF) to find out more. We definitely recommend this as an end-of-term classroom activity.
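Here is a minimal sketch of that finite state machine idea in Python. The faces and moves below are invented for illustration – to build the real map you would label your own flexagon's faces and record which pinches and flips actually connect them.

```python
# A flexagon as a finite state machine: states are faces, moves
# take you from one face to another. The transition table below
# is made up for illustration -- fill in your own flexagon's map.
transitions = {
    1: {"pinch": 2, "flip": 4},
    2: {"pinch": 3, "flip": 5},
    3: {"pinch": 1, "flip": 6},
    4: {"flip": 1},
    5: {"flip": 2},
    6: {"flip": 3},
}

face = 1  # start on face 1
for move in ["pinch", "flip", "flip"]:
    if move in transitions[face]:
        face = transitions[face][move]
        print(f"{move} -> now showing face {face}")
    else:
        print(f"{move} is not possible from face {face}")
```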

Table of Contents
A. For people who want a ready-coloured hexahexaflexagon – print and go
B. For people who want to colour in their own hexahexaflexagon – print & colour in
C. For people who want to design their own hexahexaflexagon on a computer
D. For people who don’t have a printer or want to design a hexahexaflexagon from scratch
E. Useful videos

A. For people who want a ready-coloured hexahexaflexagon

Image by Jo Brodie

• UK: Print this PDF file: Father Christmas coloured hexahexaflexagon (A4)
• US: Print this PDF file: Father Christmas coloured hexahexaflexagon (US Letter)
• Read these PDF instructions: CS4FN How to fold a hexahexaflexagon – colour (UK A4) or CS4FN How to fold a hexahexaflexagon – colour (US Letter)

 

B. For people who want to colour in their own hexahexaflexagon

Image by Jo Brodie

• UK: Print this PDF file: Father Christmas black and white hexahexaflexagon to colour – this contains a guide if you want each of the six faces to have its own colour (A4)
• US: Print this PDF file: Father Christmas black and white hexahexaflexagon to colour – this contains a guide if you want each of the six faces to have its own colour (US Letter)
• Read these PDF instructions: CS4FN How to fold a hexahexaflexagon – black and white (UK A4) or CS4FN How to fold a hexahexaflexagon – black and white (US Letter)

C. For people who want to design their own hexahexaflexagon on a computer

Image by Jo Brodie

• Use this PDF file: CS4FN blank hexahexaflexagon – design your own
• Use these folding instructions: CS4FN How to fold a hexahexaflexagon – black and white
• just the triangles – .zip of .svg & .png

It may be easier to make the flexagon first and then colour it in, as it's then easier to see which triangle ends up on which face, but the printable does have instructions on it if you want to make one that will 'work' once folded.

D. For people who don’t have a printer or who want to create one from scratch

• Read the instructions here: CS4FN How to create a hexahexaflexagon from scratch
• Use these folding instructions: CS4FN How to fold a hexahexaflexagon – black and white
• Useful website for calculating the height needed for an equilateral triangle (if you want to create hexahexaflexagons on different sizes of paper) https://www.omnicalculator.com/math/equilateral-triangle
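If you would rather work it out than use the calculator, the height of an equilateral triangle comes straight from Pythagoras: h = (√3 / 2) × side. A couple of lines of Python (a sketch, for whatever paper size you are using) do the same job:

```python
import math

def equilateral_height(side):
    """Height of an equilateral triangle: h = (sqrt(3) / 2) * side."""
    return math.sqrt(3) / 2 * side

# e.g. strips of 4 cm triangles
print(f"{equilateral_height(4.0):.2f} cm")  # ~3.46 cm
```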

E. Useful videos

Above: CS4FN’s Paul Curzon demonstrates how to fold one (note that the direction of the first round of folding is different from the written instructions above, though it doesn’t matter if you go from A to B or B to A).

Above: Create a hexahexaflexagon – Merry Christmas from CS4FN. Video by Jo Brodie (@jobrodie).

Part of our CS4FN Christmas Computing Advent Calendar.


Meet the chatbots

Sitting down and having a nice chat with a computer because they are your friend probably isn’t something you do every day. You may never have done it. We mainly still think of it as being a dream for the future. But there is lots of work being done to make it happen in the present, and you may already be asking chatbots for help regularly. The whole idea has roots that stretch far back into the past. It’s a dream that goes back to Alan Turing, and then even a little further.

The imitation game

Back around 1950, Turing was thinking about whether computers could be intelligent. He had a problem though. Once you begin thinking about intelligence, you find it is a tricky thing to pin down. Intelligence is hard to define even in humans, never mind animals or computers. Turing started to wonder if he could ask his question about machine intelligence in a different way. He turned to a Victorian parlour game called the imitation game for inspiration.

The imitation game was played with large groups at parties, but focused on two people, a man and a woman. They would go into a different room to be asked questions by a referee. The woman had to answer truthfully. The man answered in any way he believed would convince everyone else he was really the woman. Their answers were then read out to the rest of the guests. The man won the game if he could convince everyone back in the party that he was really the woman.

Pretending to be human

Turing reckoned that he could use a similar test for intelligence in a machine. In Turing’s version of the imitation game, instead of a man trying to convince everyone he’s really a woman, a computer pretends to be a human. Everyone accepts the idea that it takes a certain basic intelligence to carry on a conversation. If a computer could carry on a conversation so well that talking to it was just like talking to a human, the computer must be intelligent.

When Turing published his imitation game idea, it helped launch the field of artificial intelligence (AI). Today, the field pulls together biologists, computer scientists and psychologists in a quest to understand and replicate intelligence. AI techniques have delivered some stunning results. People have designed computers that can beat the best human at chess, diagnose diseases, and invest in stocks more successfully than humans.

A chat with a chatterbot

But what about the dream of having a chat with a computer? That’s still alive. Turing’s idea, demonstrating computer intelligence by successfully faking human conversation, became known as the Turing test. Turing thought machines would pass his test before the 20th century was over, but the goal has proved more elusive than that. People have been making better conversational chat programs, called chatbots, since the 1960s, but no one has yet made a program that can fool everyone into thinking it’s a real human.

What’s up, Doc

On the other hand, some chatbots have done pretty well. One of the first and still one of the most famous was created in 1966. It was called ELIZA. Its trick was imitating the sort of conversation you might have with a therapist. ELIZA didn't volunteer much knowledge itself, but tried to get the user to open up about what they were thinking. So the person might type "I don't feel well", and ELIZA would respond with "you say you don't feel well?" In a normal social situation, that would be a frustrating response. But it's a therapist's job to get a patient to talk about themselves, so ELIZA could get away with it. For an early example of a chatbot, ELIZA did pretty well, but after a few minutes of chatting users realised that ELIZA didn't really understand what they were saying.
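Here is a toy sketch of ELIZA's reflection trick in Python – a stand-in for, not a copy of, Weizenbaum's actual script. It just swaps pronouns and bounces the statement back as a question:

```python
# A toy ELIZA-style reflector: swap pronouns, then bounce the
# user's statement straight back. A sketch of the trick only,
# not Weizenbaum's real pattern-matching script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza(statement):
    words = statement.lower().rstrip(".!?").split()
    reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
    return f"You say {reflected}?"

print(eliza("I don't feel well"))  # -> You say you don't feel well?
```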

Where have I heard this before?

One of the big problems in making a good chatterbot is coming up with sentences that sound realistic. That’s why ELIZA tried to keep its sentences simple and non-committal. A much more recent chatterbot called Cleverbot uses another brilliantly simple solution: it doesn’t try to make up sentences at all. It just stores all the phrases that it’s ever heard, and chooses from them when it needs to say something. When a human types a phrase to say to Cleverbot, its program looks for a time in the past when it said something similar, then reuses whatever response the human gave at the time. Given that Cleverbot has had 65 million chats on the Internet since 1997, it’s got a lot to choose from. And because its sentences were all originally entered by humans, Cleverbot can speak in slang or text speak. That can lead to strange conversations, though. A member of our team at cs4fn had an online chat with Cleverbot, and found it pretty weird to have a computer tell him “I want 2 b called Silly Sally”.
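The retrieval idea is simple enough to sketch in a few lines of Python. The stored chats and the similarity measure below are toy stand-ins for Cleverbot's real ones:

```python
import difflib

# A sketch of the retrieval trick: store (prompt, reply) pairs from
# past chats, and answer new input by reusing the reply whose prompt
# is most similar to what was just typed.
memory = [
    ("hello", "hi there!"),
    ("what is your name", "I want 2 b called Silly Sally"),
    ("do you like music", "yes, especially loud music"),
]

def respond(user_input):
    prompts = [prompt for prompt, _ in memory]
    # cutoff=0.0 so we always get the single closest stored prompt
    best = difflib.get_close_matches(user_input.lower(), prompts,
                                     n=1, cutoff=0.0)[0]
    return dict(memory)[best]

print(respond("What's your name?"))  # -> I want 2 b called Silly Sally
```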

Computerised con artists

Most early chatbots were designed just for fun; now they are designed to be useful helpers or even to replace human jobs. But some chatbots are made with more sinister intent. Some years ago, a program called CyberLover was stalking dating chat forums on the Internet. It would strike up flirty conversations with people, then try to get them to reveal personal details, which could then be used to steal people's identities or credit card accounts. CyberLover even had different programmed personalities, from a more romantic flirter to a more aggressive one. Most people probably wouldn't be fooled by a robot come-on, but that's OK. CyberLover didn't mind rejection: it could start up ten relationships every half an hour.

Chatbots went on to hit the big time in the form of virtual assistants. Apple's iPhone 4S in 2011 included Siri, a computerised assistant that can find answers to human questions – sometimes with a bit of attitude. Most of Siri's humorous answers appear to be pre-programmed, but some of them come from Siri's access to powerful search engines. Then came Alexa in 2014, and suddenly the world was full of chatbots, including ones taking over social media, often for nefarious purposes. Now, with the advent of ChatGPT, they are finally at a stage where people will happily chat to them as they would to a human (though it is sometimes still frustrating), and they are definitely at the stage of replacing human jobs that involve answering requests for help and advice, for example. They haven't yet (at the time of writing) got to the stage of passing a Turing Test. But if computerised conversation continues advancing, we are not too far off a computer that can pass it. And while we're waiting, at least we've got better games to play than the Victorians had.

By the CS4FN team, updated from the original.

Related Magazines: Issue 14 (Alan Turing) and Issue 16 (Clean up your language).


How to get a head in robotics

[This article includes a free papercraft activity with a paper robot that expresses ’emotions’.]

If humans are ever to get to like and live with robots we need to understand each other. One of the ways that people let others know how they are feeling is through the expressions on their faces. A smile or a frown on someone's face tells us something about how they are feeling and how they are likely to react. We can also tell something of a person's emotions from their eyes and eyebrows. Some scientists think it might be possible for robots to express feelings this way too, but understanding how a robot can usefully express its 'emotions' (what its internal computer program is processing and planning to do next) is still in its infancy. A group of researchers in Poland, at Wroclaw University of Technology, have come up with a clever new design for a robot head that could help a computer show its feelings. It's inspired by the Teenage Mutant Ninja Turtles cartoon and movie series.

The real Emys orbicularis (European pond turtle) Image by Luis Fernández García, CC BY-SA 3.0 from Wikimedia

The real Teenage Mutant Ninja Turtle

Their turtle-inspired robotic head is called EMYS, which stands for EMotive headY System and is cleverly also the name of a European pond turtle, Emys orbicularis. Taking his inspiration from cartoons, the project's principal 'head' designer Jan Kedzierski created a mechanical marvel that can convey a whole range of different emotions by tilting a set of movable discs, one of which contains highly flexible eyes and eyebrows.

Eye see

The CS4FN/LIREC emotional Robot face with three discs like EMYS
Image by CS4FN

The lower disc imitates the movements of the human lower jaw, while the upper disc can mimic raising the eyebrows and wrinkling the forehead. There are eyelids and eyebrows linked to each eye. Have a look at your face in the mirror, then try pulling some expressions like sadness and anger. In particular look at what these do to your eyes. In the robot, as in humans, the eyelids can move to cover the eye. This helps in the expression of emotions like sadness or anger, as your mirror experiment probably showed.

Pop eye

But then things get freaky and fun. Following the best traditions of cartoons, when EMYS is 'surprised' the robot's eyes can shoot out to a distance of more than 10 centimetres! This well-known 'eyes out on stalks' cartoon technique, which deliberately over-exaggerates how people's eyes widen and stare when they are startled, is something we instinctively understand even though our eyes don't really do this. It makes use of the fact that cartoons take the real world to extremes, and audiences understand and are entertained by this sort of comical exaggeration. In fact it's been shown that people are faster at recognising cartoons of people than recognising the unexaggerated original.

High tech head builder

The mechanical internals of EMYS consist of lightweight aluminium, while the covering external elements, such as the eyes and discs, are made of lightweight plastic using 3D rapid prototyping technology. This technology allows a design on the computer to be ‘printed’ in plastic in three dimensions. The design in the computer is first converted into a stack of thin slices. Each slice of the design, from the bottom up, individually oozes out of a printer and on to the slice underneath, so layer-by-layer the design in the computer becomes a plastic reality, ready for use.

Facing the future

A ‘gesture generator’ computer program controls the way the head behaves. Expressions like ‘sad’ and ‘surprised’ are broken down into a series of simple commands to the high-speed motors, moving the various lightweight parts of the face. In this way EMYS can behave in an amazingly fluid way – its eyes can ‘blink’, its neck can turn to follow a person’s face or look around. EMYS can even shake or nod its head. EMYS is being used on the Polish group’s social robot FLASH (FLexible Autonomous Social Helper) and also with other robot bodies as part of the LIREC project (www.lirec.eu [archived]). This big project explores the question of how robot companions could interact with humans, and helps find ways for robots to usefully show their ‘emotions’.
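The article only describes the gesture generator in outline, but the idea can be sketched like this. The expression names, motor names and positions below are invented for illustration – this is not EMYS's real control code:

```python
# A sketch of the 'gesture generator' idea: each named expression
# is broken down into a series of simple motor commands. All names
# and positions here are hypothetical, invented for illustration.
GESTURES = {
    "surprised": [("eyelids", "wide_open"), ("eyes", "extend"),
                  ("upper_disc", "raise")],
    "sad":       [("eyelids", "half_closed"), ("upper_disc", "tilt_down")],
}

def perform(expression):
    for motor, position in GESTURES.get(expression, []):
        print(f"move {motor} -> {position}")  # stand-in for a motor command

perform("surprised")
```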

Do try this at home

You can program a paper version of an EMYS-like robot. Download and follow the instructions on the Emotion Machine in the printable version below and build your own EMYS.

Print, cut out and make your own emotional robot. The strips of paper at the top ('sliders') containing the expressions and letters are slotted into the grooves on the robot's face, and happy or annoyed faces can be created by moving the sliders.

By selecting a series of different commands in the Emotion Engine boxes, the expression on EMYS’s face will change. How many different expressions can you create? What are the instructions you need to send to the face for a particular expression? What emotion do you think that expression looks like – how would you name it? What would you expect the robot to be ‘feeling’ if it pulled that face?

Emotion Machine Sheet – a robot head with strips to thread for eyes, eyebrows and mouth
Click on the image to go to the download page. Activity sheet by CS4FN

Go further

Why not draw your own sliders, with different eye shapes, mouth shapes and so on? Explore and experiment! That's what computer scientists do.

Paul Curzon, Queen Mary University of London



The machines can translate now

The Rosetta Stone
Rosetta Stone: with translation of the same text in three languages. Image © Hans Hillewaert CC BY-SA 4.0 from wikimedia.

“The Machines can translate now…
…I SAID ‘THE MACHINES CAN TRANSLATE NOW'”

The stereotype of the Englishman abroad when confronted by someone who doesn’t speak English is just to say it louder. That could soon be a thing of the past as portable devices start to gain speech recognition skills and as the machines get better at translating between languages.

Traditionally machine translation has involved professional human linguists manually writing lots of translation rules for the machines to follow. Recently there have been great advances in what is known as statistical machine translation, where the machine learns the translation rules automatically. It does this using a parallel corpus*: just lots of pairs of sentences, one a sentence in the original language, the other its translation. Parallel corpora* are extracted from multilingual news sources like the BBC website, where professional human translators have done the translations.
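Before looking at a real example, here is a drastically simplified sketch of the learning step: word-by-word co-occurrence counting on a tiny made-up corpus. Real statistical translators use far more sophisticated alignment models and handle whole phrases, but the flavour is the same:

```python
from collections import Counter, defaultdict

# Count how often each foreign word appears alongside each English
# word across the sentence pairs, then 'translate' word by word
# using the most frequent partner. A toy stand-in for real
# statistical machine translation -- note how common filler words
# like 'the' can swamp the counts, one reason real systems need
# cleverer alignment models.
parallel_corpus = [
    ("la maison bleue", "the blue house"),
    ("la maison", "the house"),
    ("une fleur", "a flower"),
    ("la fleur bleue", "the blue flower"),
]

cooccur = defaultdict(Counter)
for foreign, english in parallel_corpus:
    for word in foreign.split():
        cooccur[word].update(english.split())

def translate(sentence):
    return " ".join(cooccur[w].most_common(1)[0][0] for w in sentence.split())

print(translate("la fleur"))  # -> the flower
```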

Let's look at an example translation of the accompanying original Arabic:

Machine Translation: Baghdad 1-1 (AFP) – The official Iraqi news agency reported that the Chinese vice-president of the Revolutionary Command Council in Iraq, Izzat Ibrahim, met today in Baghdad, chairman of the Saudi Export Development Center, Abdel Rahman al-Zamil.

Human Translation: Baghdad 1-1 (AFP) – Iraq’s official news agency reported that the Deputy Chairman of the Iraqi Revolutionary Command Council, Izzet Ibrahim, today met with Abdul Rahman al-Zamil, Managing Director of the Saudi Center for Export Development.

This example shows a sentence from an Arabic newspaper, then its translation by Queen Mary University of London's statistical machine translator, and finally a translation by a professional human translator. The statistical translation does allow a reader to get a rough understanding of the original Arabic sentence. There are several mistakes, though.

Mistranslating the “Managing Director” of the export development center as its “chairman” is perhaps not too much of a problem. Mistranslating “Deputy Chairman” as the “Chinese vice-president” is very bad. That kind of mistranslation could easily lead to grave insults!

That reminds me of the point in ‘The Hitch-Hiker’s Guide to the Galaxy’ where Arthur Dent’s words “I seem to be having tremendous difficulty with my lifestyle,” slip down a wormhole in space-time to be overheard by the Vl’hurg commander across a conference table. Unfortunately this was understood in the Vl’hurg tongue as the most dreadful insult imaginable, resulting in them waging terrible war for centuries…

For now humans are still the best translators, but the machines are learning from them fast!

Paul Curzon, Queen Mary University of London

*corpus and corpora = singular and plural of the word used to describe a collection of written texts, literally a 'body' of text. A corpus might be all the works written by one author; corpora might be works of several authors.


Letters from the Victorian Smog: Braille: binary, bits & bytes

We take for granted that computers use binary: to represent numbers, letters, or more complicated things like music and pictures…any kind of information. That was something Ada Lovelace realised very early on. Binary wasn’t invented for computers though. Its first modern use as a way to represent letters was actually invented in the first half of the 19th century. It is still used today: Braille.

Braille is named after its inventor, Louis Braille. He was born 6 years before Ada though they probably never met as he lived in France. He was blinded as a child in an accident and invented the first version of Braille when he was only 15 in 1824 as a way for blind people to read. What he came up with was a representation for letters that a blind person could read by touch.

Choosing a representation for the job is one of the most important parts of computational thinking. It really just means deciding how information is going to be recorded. Binary gives ways of representing any kind of information that is easy for computers to process. The idea is just that you create codes to represent things made up of only two different characters: 1 and 0. For example, you might decide that the binary for the letter 'p' was: 01110000. For the letter 'c' on the other hand you might use the code, 01100011. The capital letters, 'P' and 'C' would have completely different codes again. This is a good representation for computers to use as the 1s and 0s can themselves be represented by high and low voltages in electrical circuits, or switches being on or off.
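The example codes above aren't arbitrary, in fact: they are the standard ASCII (and Unicode) codes for those letters, and a couple of lines of Python will reproduce them:

```python
# The codes in the text are the standard ASCII ones -- Python can
# print the 8-bit binary code for each letter directly:
for letter in "pcPC":
    print(letter, format(ord(letter), "08b"))
# p 01110000
# c 01100011
# P 01010000
# C 01000011
```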

Louis Braille was inspired by an earlier 'Night Writing' system developed by Charles Barbier to allow French soldiers in the 1800s to read military messages without using a lamp (which would give away their position, putting them at risk).

The first representation Louis Braille chose wasn't great though. It had dots, dashes and blanks – a three-symbol code rather than the two symbols of binary. It was hard to tell the difference between the dots and dashes by touch, so in 1837 he changed the representation – switching to a code of dots and blanks.

He had invented the first modern form of writing based on binary.

Braille works in the same way as modern binary representations for letters. It uses collections of raised dots (1s) and no dots (0s) to represent them. Each gives a bit of information in computer science terms. To make the bits easier to touch they’re grouped into pairs. To represent all the letters of the alphabet (and more) you just need 3 pairs as that gives 64 distinct patterns. Modern Braille actually has an extra row of dots giving 256 dot/no dot combinations in the 8 positions so that many other special characters can be represented. Representing characters using 8 bits in this way is exactly the equivalent of the computer byte.

Modern computers use a standardised code, called Unicode. It gives an agreed code for referring to the characters in pretty well every language ever invented including Klingon! There is also a Unicode representation for Braille using a different code to Braille itself. It is used to allow letters to be displayed as Braille on computers! Because all computers using Unicode agree on the representations of all the different alphabets, characters and symbols they use, they can more easily work together. Agreeing the code means that it is easy to move data from one program to another.
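You can see the byte-sized representation at work in Unicode's Braille Patterns block, which starts at code U+2800: each of the eight dot positions is one bit of the offset, so all 256 patterns fit in exactly one byte. A small Python sketch:

```python
# Unicode Braille: the block starts at U+2800 and dot n (1-8) is
# bit n-1 of the offset, so the eight dot positions fill one byte,
# giving all 256 patterns.
def braille_char(dots):
    offset = 0
    for dot in dots:
        offset |= 1 << (dot - 1)
    return chr(0x2800 + offset)

print(braille_char({1}))      # dot 1        -> U+2801, Braille 'a'
print(braille_char({1, 2}))   # dots 1 and 2 -> U+2803, Braille 'b'
print(braille_char({1, 4}))   # dots 1 and 4 -> U+2809, Braille 'c'
```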

The 1830s were an exciting time to be a computer scientist! This was around the time Charles Babbage met Ada Lovelace and they started to work together on the analytical engine. The ideas that formed the foundation of computer science must have been in the air, or at least in the Victorian smog.

Paul Curzon and Jo Brodie, Queen Mary University of London



Only the fittest slogans survive!

Being creative isn't just for the fun of it. It can be serious too. Marketing people are paid vast amounts to come up with slogans for new products, and in the political world, a good, memorable soundbite can turn the tide over who wins and loses an election. Coming up with great slogans that people will remember for years needs both a mastery of language and a creative streak too. Algorithms are now getting in on the act, and if anyone can create a program as good as the best humans, they will soon be richer than the richest marketing executive. Polona Tomašič and her colleagues from the Jožef Stefan Institute in Slovenia are one group exploring the use of algorithms to create slogans. Their approach is based on the way evolution works – genetic algorithms. Only the fittest slogans survive!

A mastery of language

To generate a slogan, you give their program a short description of the slogan's topic – a new chocolate bar, perhaps. It then uses existing language databases and programs to give it the necessary understanding of language.

First, it uses a database of common grammatical links between pairs of words, generated from Wikipedia pages. Then skeletons of slogans are extracted from an Internet list of famous (so successful) slogans. These skeletons don't include the actual words, just the grammatical relationships between the words. They provide general outlines that successful slogans follow.

From the passage given, the program pulls out keywords that can be used within the slogans (beans, flavour, hot, milk, …). It generates a set of fairly random slogans from those words to get started. It does this just by slotting keywords into the skeletons along with random filler words in a way that matches the grammatical links of the skeletons.

Breeding Slogans

New baby slogans are now produced by mating pairs of initial slogans (the parents). This is done by swapping bits into the baby from each parent. Both whole sections and individual words are swapped in. Mutation is allowed too. For example, adjectives are added in appropriate places. Words are also swapped for words with a related meaning. The resulting children join the new population of slogans. Grammar is corrected using a grammar checker.

Culling Slogans

Slogans are now culled. Any that are the same as existing ones go immediately. The slogans are then rated to see which are fittest. This uses simple properties like their length, the number of keywords used, and how common the words used are. More complex tests are based on how related the meanings of the words are, and how commonly pairs of words appear together in real sentences. Together these combine to give a single score for the slogan. The best are kept to breed in the next generation; the worst are discarded (they die!), though a random selection of weaker slogans is also allowed to survive. The result is a new set of slogans that are slightly better than the previous set.
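Here is a minimal sketch in Python of the breed-and-cull loop just described. The toy fitness function simply rewards keywords and brevity; the real system's slogan skeletons, grammar checker and much richer fitness measures are left out:

```python
import random

# A minimal genetic-algorithm sketch of the breed-and-cull loop.
# The fitness function (keyword hits minus a length penalty) is a
# toy stand-in for the researchers' far richer scoring.
KEYWORDS = {"chocolate", "hot", "milk", "flavour"}
WORDS = list(KEYWORDS) + ["the", "a", "tasty", "dream", "banana", "oven"]

def fitness(slogan):
    words = slogan.split()
    return sum(w in KEYWORDS for w in words) - 0.1 * len(words)

def breed(mum, dad):
    m, d = mum.split(), dad.split()
    cut = random.randint(1, min(len(m), len(d)) - 1)  # crossover point
    child = m[:cut] + d[cut:]
    if random.random() < 0.3:                         # occasional mutation
        child[random.randrange(len(child))] = random.choice(WORDS)
    return " ".join(child)

# Start with fairly random slogans, then breed and cull repeatedly.
population = [" ".join(random.choices(WORDS, k=4)) for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # cull the weakest
    population = survivors + [breed(*random.sample(survivors, 2))
                              for _ in range(20)]
print(max(population, key=fitness))
```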

Many generations later…

The program breeds and culls slogans like this for thousands, even millions of generations, gradually improving them, until it finally chooses its best. The slogans produced are not yet world-beating on their own, and vary in quality as judged by humans. For chocolate, one run came up with slogans like "The healthy banana" and "The favourite oven", for example. It finally settled on "The HOT chocolate", which is pretty good.

More work is needed on the program, especially its fitness function – the way it decides what is a good slogan and what isn't. As it stands, this sort of program isn't likely to replace anyone's marketing department. It could help with brainstorming sessions though, sparking new ideas while leaving humans to make the final choice. Supporting human creativity rather than replacing it is probably just as rewarding for the program, after all.

(From the archive)


Related Magazines: Issue 22 (Creative Computing).


What are birds actually saying?

Birds make so much noise, and it’s very complex. Is it just babble, or are they saying complicated things to each other? If so, could we work out what they are saying, what it means? Could we learn their language and speak to the birds?

We know that bird communication is not as complicated as the words and sentences in human speech. So far, no one has been able to find grammatical patterns like those we find in human language. There apparently aren’t rules for birds like the ones we have about verbs and nouns. Birds don’t have to learn grammar! Exactly how complex bird languages are is still hotly debated, though.

Sometimes birds are passing on information about predators, or food, and sometimes they are just advertising their own fitness – showing off to get a mate (a bit like karaoke nights). Scientists have proved that such specific kinds of information are in the sounds birds make by observing bird behaviour. By playing recordings of birds and seeing how other birds react, they can see what information was communicated by a particular sound. If you play a 'predator near' call, for example, then other birds flee, but they stay put if you play other calls. They get the message.

Birds are definitely passing on specific information when they sing.

It turns out some birds have even learnt the languages of other animals and use them both to help those other animals and to support a life of crime. Many animals listen for the alarm calls of the animals around them, and so flee when others see a problem. Birds called Drongos, for example, act as lookouts for Meerkats, giving warning calls when they see Meerkat predators, allowing them to return to the safety of their burrows. However, the Drongos also sound false alarms every so often. They do it when they see a Meerkat with some juicy morsel. As the Meerkats run, the Drongo swoops in to steal the abandoned food.

Unfortunately for the Drongo, Meerkats are quite clever and get wise to the con. Eventually, they start to ignore the Drongo and only listen for their own Meerkat sentry’s call. The Drongo has another trick though. They are really good at mimicking sounds they hear, just like parrots. They have learnt to speak Meerkat just like the scientists do in experiments. So when the Meerkats stop reacting, the Drongos just switch tactics and start making perfect Meerkat language alarm calls instead. Once again the food is theirs.

Drongos give false alarms so they can steal food.

While most of us can’t reproduce bird sounds ourselves, and so talk directly to animals, we can certainly write programs to do it. In Star Wars, C3PO is a master of languages, speaking millions. Real robots of the near future will be able to mimic the sounds of whatever animals they wish and communicate with them in at least the simple ways that animals of different species listen and talk to each other. Perhaps something like this might be used to help protect endangered species from their predators, for example, watching for hawks and issuing timely warnings. We just have to hope they don’t turn to the Dark Side, like the Drongos, and use these skills to support a life of crime.

Dan Stowell and Paul Curzon, Queen Mary University of London

Magazine

  • Issue 21: Computing Sounds Wild – explores the work of scientists and engineers who are using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn can help recognise birds even when they can't be seen, helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer-generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.


Stopping sounds getting left behind: the Bela computer

Clock submerged under blue ripples of sound
Clock Waves Image by Gerd Altmann from Pixabay

Computer-based musical instruments are wonderfully flexible and becoming ever more popular. They have had one disadvantage, though. The sound could drag behind the musician in a way that made some digital instruments seem unplayable. Thanks to a new computer called Bela, that problem may now be a thing of the past.

If you pluck a guitar string or thwack a drum the sound you hear is instantaneous. Well, nearly. There's a tiny delay. The sound still has to leave the instrument and travel to your ear. The vibration of the string or drum skin pushes the air back and forth, and vibrating air is all a sound is. Your ear receives the sound as soon as that vibrating air gets to you. Then your brain has to recognise it as a sound (and tell you what kind of sound it is, which direction it came from, which instrument produced it and so on!). The time it takes for sound and then your brain to do all that is measured in tens of milliseconds – thousandths of a second. It is called 'latency', not because the delay makes it 'late' (though it does!), but from the Latin word latens, meaning hidden or concealed, because the time between the signal being created and it being received is hidden from us.

Digital instruments take slightly longer than physical instruments, however, because electronic circuitry and computer processing is involved. It’s not just the sound going through air to ear but a digital signal whizzing through a circuit, or being processed by a computer, first to generate the sound which then goes through air to ear.

Your ear (actually your brain) will detect two sounds as being separate if there’s a gap of around 30 milliseconds between them. Drop that gap down to around 10 milliseconds between the sounds and you’ll hear them as a single sound. If that circuit-whizzing adds 10-20 milliseconds then you’re going to notice that the instrument is lagging behind you, making it feel unplayable. Reducing a digital instrument’s latency is therefore a very important part of improving the experience for the musician.
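A big chunk of that circuit-whizzing time is buffering: the computer collects a block of audio samples before processing it, so that part of the delay is roughly the buffer size divided by the sample rate. This little calculation (illustrative figures only – converters and drivers add more delay on top) shows why shrinking the buffer matters so much:

```python
# Audio buffering latency: latency (seconds) ~= buffer size / sample rate.
# Illustrative figures only -- real systems add converter and
# driver delays on top of this.
SAMPLE_RATE = 44_100  # samples per second (CD quality)

for buffer_size in (1024, 256, 64, 8):
    latency_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:5d}-sample buffer -> {latency_ms:6.2f} ms")
# 1024 samples ~ 23 ms (very noticeable); 8 samples ~ 0.2 ms
```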

In 2014 Andrew McPherson and colleagues at Queen Mary University of London aimed to solve this problem. They developed Bela, a tiny computer, similar in size to a Raspberry Pi or Arduino, that can be used in a variety of digital instruments but which is special because it has an ultra-low latency of only around 2 milliseconds – super fast.

How does it do it? A computer can seem to run slowly if it is trying to do lots of things at the same time (e.g. lots of apps running or too many windows open at once). That is when the experience for the user can be a bit glitchy. Bela works by prioritising the audio signal above ALL other activities to ensure that, no matter what else the computer is doing, the gap between input (pressing a key) and output (hearing a sound) is barely noticeable. The small size of Bela also makes it completely portable and so easy to use in musical performances without needing the performer to be tethered to a large computer.

There is definitely a demand for such a computer amongst musicians. Andrew and the team wanted to make Bela available, so began fundraising through Kickstarter to create more kits. Their fundraiser reached £5,000 within four hours and within a month they’d raised £54,000, so production could begin and they launched a company, Augmented Instruments Ltd, to sell the Bela hardware kits.

Bela allows musicians to stop worrying about the sounds getting left behind. Instead, they can just get on with playing and creating amazing sounds.

Jo Brodie and Paul Curzon, Queen Mary University of London


Ada Lovelace: Visionary

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – how to make a machine that could do calculations so we wouldn't need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to get the money and build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That's where Ada came in. They worked together to try to realise their joint dream, working out how to program as they went.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with a creative flair. While Charles had come up with the innovative idea of the Analytical Engine itself, he didn't see beyond his original idea of the computer as a calculator. She saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television, all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.

Ada saw even further though. She combined maths with a creative flair, and so she realised that not only could computers store and play music, they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn't just the first programmer, she was the first truly creative programmer.

Paul Curzon, Queen Mary University of London
