Digital lollipop: no calories, just electronics!

Can a computer create a taste in your mouth? Imagine scrolling down a list of flavours and then savouring your sweet choice from a digital lollipop. Not keen on that flavour, just click and choose a different one, and another and another. No calories, just the taste.

Nimesha Ranasinghe, a researcher at the National University of Singapore, is developing a Tongue Mounted Digital Taste Interface, or digital lollipop. It sends tiny electrical signals to the very tip of your tongue to stimulate your taste buds and create a virtual taste!

One of UNESCO’s 2014 ’10 best innovations in the world’, the prototype doesn’t quite look like a lollipop (yet). There are two parts to this sweet sensation: the wearable tongue interface and the control system. The bit you put in your mouth, the tongue interface, has two small silver electrodes. You touch them to the tip of your tongue to get the taste hit. The control system generates a tiny electrical current and a minuscule temperature change, which together activate your taste buds to create a taste.

The prototype lollipop can create sour, salty, bitter, sweet, minty, and spicy sensations, but it’s not just a bit of food fun. What if you had to avoid sweet foods or had a limited sense of taste? Perhaps the lollipop can help people with food addictions, just like the e-cigarette has helped those trying to give up smoking?

But eating is more than just a flavour on your tongue; it is a multi-modal experience. You see the red of a ripe strawberry, hear the crunch of a carrot, feel sticky salt on chippy fingers, smell the Sunday roast, anticipate that satisfied snooze afterwards. How might computers simulate all that? Does it start with a digital lollipop? We will have to wait and see, hear, taste, smell, touch and feel!

Taste over the Internet

The Singapore team are exploring how to send tastes over the Internet. They have suggested rules for sending ‘taste’ messages between computers, called the Taste Over Internet Protocol, including a messaging format called TasteXML. They’ve also outlined the design for a mobile phone with electrodes to deliver the flavour! Sweet or salt anyone?

Jane Waite, Queen Mary University of London



EPSRC supports this blog through research grant EP/W033615/1.

The tale of the mote and the petrel

Giant petrel flying over ice and rock
Image by Eduardo Ruiz from Pixabay

Biology and computer science can meet in some unexpected, not to mention inhospitable, places. Who would have thought that the chemical soup in the nests of Petrels studied by field biologists might help in the development of futuristic dust-sized computers, for example?

Just Keep Doubling

One of the most successful predictions in Computer Science was made by Gordon Moore, co-founder of Intel. Back in 1965 he suggested that the number of transistors that can be squeezed onto an integrated circuit – the hardware computer processors are made of – doubled every few years: computers get ever more powerful and ever smaller. In the 60 or so years since Moore’s paper it has remained an amazingly accurate prediction. Will it continue to hold, though, or are we reaching some fundamental limit? Researchers at chip makers are confident that Moore’s Law can be relied on for the foreseeable future. The challenge will be met by the materials scientists, the physicists and the chemists. Computer scientists must then be ready for the Law’s challenge too: delivering the software advances so that its trends are translated into changes in our everyday lives. It will lead to ever more complex systems on a single chip, and so ever smaller computers that will truly disappear into the environment.
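To get a feel for what repeated doubling means, try a quick back-of-the-envelope calculation (a Python sketch; the exact doubling period has varied over the decades, so the two-year figure here is just an illustrative assumption):

```python
# Repeated doubling adds up astonishingly fast.
# Illustrative assumption: transistor counts double every 2 years.
years = 60
doublings = years // 2
print(f"{doublings} doublings in {years} years: a factor of {2 ** doublings:,}")
# -> 30 doublings in 60 years: a factor of 1,073,741,824 (about a billion)
```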

Dusting computers

A wave of bright specks sparking with light
Image by Gosia K. from Pixabay

Motes are one technology developed on the back of this trend. The aim is to create dust-sized computers. For example, the world’s smallest computer as of 2015 was the Michigan Micro Mote. It was only a few millimetres big but was a fully working computer system, able to power itself, sense the world, process the data it collects and communicate that data to other computers. In 2018 IBM announced a computer with sides a millimetre long. Rising to the challenge, the Michigan team soon announced their new mote with sides a third of a millimetre! The shrinking of motes is not likely to stop!

Scatter motes around the environment and they form unobservable webs of intelligent sensors. Scatter them on a battlefield to detect troop movements or on or near roads to monitor traffic flow or pollution. Mix them in concrete and monitor the state of a bridge. Embed them in the home to support the elderly or in toys to interact with the kids. They are a technology that drives the idea of the Internet of Things where everyday objects become smart computers.


What barriers must be overcome to make dust-sized motes a ubiquitous reality? Much of the area of a computer is taken up by its connections to the outside world – all those pins allowing things to be plugged in. They can now be replaced by wireless communications. Computers also traditionally contain multiple chips, each housing separate processors. There it is not the transistors that are the problem but the packaging – the chip casings are both bulky and expensive. Now we have “multicore” chips: large numbers of processors on a single small chip, courtesy of Moore’s Law. This gives computer scientists significant challenges over how to develop software that runs on such complicated hardware and uses its resources well. Power can come from solar panels, allowing motes to recharge constantly, even from indoor light. Even then, though, they still need batteries to store the energy. Battery technology is the only big problem that remains.

Enter the Petrels

But how do you test a device like that? Enter the Petrels. Intel’s approach is not to test futuristic technology on average users but to look for extreme ones who believe a technology will deliver them massive benefits. In the case of motes, their early extreme users were field biologists who wanted to keep tabs on birds in extremely harsh field conditions. Not only is it physically difficult for humans to observe sea birds’ nests on inhospitable cliffs, but human presence disturbs the birds. The solution: scatter motes in the nests to detect heat, humidity and the like, from which the state and behaviour of the birds can be deduced. A nest is an extremely harsh environment for a computer though, both physically and chemically. A whole bunch of significant problems, overlooked by normal lab testing, had to be overcome. The challenge of deploying motes in such a harsh environment led to major improvements in the technology.


Moore’s Law is with us for a while yet, and with the efforts of materials scientists, physicists, chemists, computer scientists – and even field biologists and the sea birds they study – it will continue to revolutionise our lives.

Paul Curzon, Queen Mary University of London (Updated from the archive)



EPSRC supports this blog through research grant EP/W033615/1. 

Fran Allen: Smart Translation

Computers don’t speak English, or Urdu or Cantonese for that matter. They have their own special languages that human programmers have to learn if they want to create new applications. Even those programming languages aren’t the language computers really speak. They only understand 1s and 0s. The programmers have to employ translators to convert what they say into Computerese (actually binary): just as if I wanted to speak with someone from Poland, I’d need a Polish translator. Computer translators aren’t called translators though. They are called ‘compilers’, and just as it might be a Pole who translated for me into Polish, compilers are special programs that can take text written in a programming language and convert it into binary.

The development of good compilers has been one of the most important advances of the early years of computing, and Fran Allen, one of the star researchers of computer giant IBM, was awarded the Turing Award for her contribution. It is the Computer Science equivalent of a Nobel Prize. Not bad given she only joined IBM to clear her student debts from University.

Fran was a pioneer with her groundbreaking work on ‘optimizing compilers’. Translating human languages isn’t just about taking a word at a time and substituting each for the word in the new language. You get gibberish that way. The same goes for computer languages.

Things written in programming languages are not just any old text. They are instructions. You actually translate chunks of instructions together in one go. You also add a lot of detail to the program in the translation, filling in every little step.

Suppose a Japanese tourist used an interpreter to ask me for directions for getting to Sheffield from Leeds. I might explain it as:

“Follow the M1 South from Junction 43 to Junction 33”.

If the Japanese translator explained it as a compiler would, they might actually say (in Japanese):

“Take the M1 South from Junction 43 as far as Junction 42, then follow the M1 South from Junction 42 as far as Junction 41, then follow … from Junction 34 as far as Junction 33”.

Computers actually need all the minute detail to follow the instructions.

The most important thing about computer instructions (i.e., programs) is usually how fast following them gets the job done. Imagine I was on the information desk at Heathrow airport and the tourist wanted to get to Sheffield. I’ve never done that journey. I do know how to get from Heathrow to Leeds as I’ve done it a lot. I’ve also gone from Leeds to Sheffield a lot, so I know that journey too. So the easiest way for me to give instructions for getting from Heathrow to Sheffield, without much thought, and be sure it gets the tourist there, might be to say:

Go from Heathrow to Leeds:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Leeds at Junction 43

Then go from Leeds to Sheffield:

  1. Take the M1 South to Sheffield at Junction 33

That is easy to write, and perhaps made up of instructions I’ve written before. Programmers reuse instructions like this a lot – it both saves their time and reduces the chances of introducing mistakes into the instructions. That isn’t the optimum way to do the journey, of course: you pass the turn-off for Sheffield on the way up. An optimizing compiler is an intelligent compiler. It looks for inefficiency and converts the instructions into a shorter and faster set. The Japanese translator, if acting like an optimizing compiler, would remove the redundant instructions from the ones I gave and simplify them (before converting them to all the junction-by-junction detailed steps) to:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Sheffield at Junction 33

Much faster! Much more intelligent! Happier tourists!
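For a flavour of how that might look in code, here is a toy ‘peephole optimizer’ in Python, using journey legs as stand-in instructions. It spots the wasteful overshoot-and-double-back pattern and merges the two legs, just as the translator did. It is only an illustration of the idea, not how a real compiler such as Fran Allen’s is built:

```python
# A toy "peephole optimizer" for journey instructions. Real optimizing
# compilers apply the same idea to machine instructions: scan for a
# wasteful pattern and rewrite it as something cheaper.

def optimize(route):
    """Merge a leg that overshoots a junction with the leg doubling back.

    Toy assumption: M1 junction numbers increase heading North, so
    'North to 43' followed by 'South to 33' means junction 33 was
    passed on the way up and we can simply stop there instead.
    """
    result = []
    for leg in route:
        prev = result[-1] if result else None
        if (prev is not None and prev["road"] == leg["road"]
                and prev["dir"] == "North" and leg["dir"] == "South"
                and leg["junction"] < prev["junction"]):
            result[-1] = {"road": leg["road"], "dir": "North",
                          "junction": leg["junction"]}
        else:
            result.append(leg)
    return result

route = [
    {"road": "M4",  "dir": "West",      "junction": "4B"},
    {"road": "M25", "dir": "clockwise", "junction": 21},
    {"road": "M1",  "dir": "North",     "junction": 43},  # Leeds
    {"road": "M1",  "dir": "South",     "junction": 33},  # Sheffield
]
for leg in optimize(route):
    print("Take the", leg["road"], leg["dir"], "to Junction", leg["junction"])
```

Run it and the four-leg route comes out as the optimized three-leg one, ending at Junction 33.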

Next time you take the speed of your computer for granted, remember it is not just that fast because the hardware is quick, but because, thanks to people like Fran Allen, the compilers don’t just do what the programmers tell them to do. They are far smarter than that.

Paul Curzon, Queen Mary University of London (Updated from the archive)



EPSRC supports this blog through research grant EP/W033615/1. 

A gendered timeline of technology

(Updated from previous versions, July 2025)

Women have played a gigantic role in the history of computing. Their ideas form the backbone of modern technology, though that has not always been obvious. Here is a gendered timeline of technology innovation to offset that.

825 Muslim scholar Al-Khwarizmi kicks it all off with a book on algorithms – recipes for how to do computation – pulling together the work of Indian mathematicians. Of course back then it’s people who do all the computation, as electronic computers won’t exist for another millennium.

1587 Mary, Queen of Scots loses her head because the English Queen, Elizabeth I, has a crack team of spies that are better at computer science than Mary’s are. They’ve read the Arab mathematician Al-Kindi’s book on the science of cryptography so they can read all Mary’s messages.

1650 Maria Cunitz publishes Urania Propitia, an updated book of astronomical tables based on the ones by Johannes Kepler. She gives an improved algorithm over his for calculating the positions of the planets in the sky. That, and her care as a human computer, make it the most accurate to date.

1757 Nicole-Reine Lepaute works as a human computer as part of a team of three calculating the date Halley’s comet will return to greater accuracy (within a month) than Halley had managed (his prediction was only good to within a year).

1784 Mary Edwards is paid as a human computer helping compile The Nautical Almanac, a book of data used to help sailors work out their position (longitude) at sea. She had been doing the work in her husband’s name for about 10 years prior to this.

1787 Caroline Herschel becomes the first woman to be paid to be an astronomer (by King George III) as a result of finding new comets and nebulae. She goes on to spend two years creating the most comprehensive catalogue of stars ever created to that point. This involves acting as a human computer, doing vast amounts of computation to calculate positions.

1818 Mary Shelley writes the first science fiction novel on artificial life, Frankenstein.

1827 Jane Webb publishes the first ever Egyptian mummy novel. Set in the future, it predicts robot surgeons, AI lawyers and a version of the Internet.

1842 Ada Lovelace and Charles Babbage work on the Analytical Engine. Lovelace shows that the machine could be programmed to calculate a series of numbers called Bernoulli numbers, if Babbage can just get the machine built. He can’t. It’s still Babbage who gets most of the credit for the next hundred-plus years. Ada predicts that one day computers will compose music. A century or so later she is proved right.

1854 George Boole publishes his work on a logical system that remains obscure until the 1930s, when Claude Shannon discovers that Boolean logic can be electrically applied to create digital circuits.

1856 Statistician (and nurse) Florence Nightingale returns from the Crimean War and launches the subject of data visualisation to convince politicians that soldiers are dying in hospital because of poor sanitation.

1912 Thomas Edison claims “woman is now centuries, ages, even epochs behind man”, the year after Marie Curie wins the second of her two Nobel prizes.

1927 Metropolis, a silent science fiction film, is released. Male scientists kidnap a woman and create a robotic version of her to trick people and destroy the world. The robotic Maria dances nude to ‘mesmerise’ the workers. The underlying assumptions are bleak: women with power should be replaced with docile robots, bodies are more important than brains, and working class men are at the whim of beautiful gyrating women. Could the future be more offensive?

1931 Mary Clem starts work as a human computer at Iowa State College. She invents the zero check as a way of checking for errors in algorithms human computers (the only kind at the time) are following.

1941 Hedy Lamarr, better known as a blockbuster Hollywood actress, co-invents frequency hopping: communicating by constantly jumping from one frequency to another. This idea underlies much of today’s mobile technology.

1943 Thomas Watson, the CEO of IBM, announces that he thinks: “there is a world market for maybe 5 computers”. It’s hard to believe just how wrong he was!

1945 Grace Murray Hopper and her associates are hard at work on an early computer called Mark I when a moth causes the circuit to malfunction. Hopper (later made an admiral) refers to this as ‘debugging’ the circuit. She tapes the bug to her logbook. After this, computer malfunctions are referred to as ‘bugs’. Her achievements didn’t stop there: she develops the first compiler and one of the pioneering programming languages.

1946 The Electronic Numerical Integrator and Computer (ENIAC) is the world’s first general purpose electronic computer. The main six programmers, all highly skilled mathematicians, are women. They are seen as capable programmers, but because the work is considered repetitive it is labelled ‘sub-professional’. Once men realise that it is actually interesting and fun, programming is re-classed as ‘professional’, the salaries become higher, and men become dominant in the field.

1949 A Popular Mechanics magazine article predicts that the computers of the future might weigh “as little as” 1.5 tonnes each. That’s over 10,000 iPhones!

1958 Daphne Oram, a pioneer of electronic music, co-founds the BBC Radiophonic Workshop, responsible for the soundscapes behind hundreds of TV and radio programmes. She suggests the idea of spatial sound, where sounds are placed in specific locations.

1966 A paper is published on ELIZA, the first chatbot, which, in its psychotherapist role, people treat as human. It starts an unfortunately long line of female chatbots. It is named after a character from the play Pygmalion, about a working-class woman taught to speak in a posh voice. The Greek myth of Pygmalion is about a male sculptor falling in love with a statue he made. Hmm… Joseph Weizenbaum later agrees the choice of name was wrong as it stereotyped women.

1967 The original series of TV show Star Trek includes an episode where mad ruler Harry Mudd runs a planet full of identical female androids, ‘fully functional’ for physical pleasure, who tend to his whims. But that’s not the end of the pleasure bots in this timeline…

1969 Margaret Hamilton is in charge of the team developing the in-flight software for the Apollo missions, including the Apollo 11 Moon landing.

1969 Dina St Johnston founds the UK’s first independent software house. It is a massive success, writing software for lots of big organisations including the BBC and British Rail.

1972 Karen Spärck Jones publishes a paper describing a new way to pick out the most important documents when doing searches. Twenty years later, once the web is up and running, the idea comes of age. It’s now used by most search engines to rank their results.

1972 Ira Levin’s book ‘The Stepford Wives’ is published. A group of suburban husbands kill their successful wives and create look-alike robots to serve as docile housewives. It’s made into a film in 1975. Sounds like those men were feeling a bit threatened.

1979 The US Department of Defence introduces a new programming language called Ada after Ada Lovelace.

1982 The film Blade Runner is released. Both men and women are robots but oddly there are no male robots modelled as ‘basic pleasure units’. Can’t you guys think of anything else?

1984 Technology anthropologist Lucy Suchman draws on social sciences research to overturn the current computer science thinking on how best to design interactive gadgets that are easy to use. She goes on to win the Benjamin Franklin Medal, one of the oldest and most prestigious science awards in the world.

1985 In the film Weird Science, two teenage supergeeks hack into the government’s mainframe and instead of using their knowledge and skills to do something really cool…they create the perfect woman. Yawn. Not again.

1985 Sophie Wilson designs the instruction set for the first ARM RISC chip, creating a chip that is both faster and uses less energy than traditional designs: just what you need for mobile gadgets. This chip family goes on to power 95% of all smartphones.

1988 Ingrid Daubechies comes up with a practical way to use ‘wavelets’, mathematical tools that when drawn are wave-like. This opens up powerful new ways to store images in far less memory, make images sharper, and much, much more.

1995 Angelina Jolie stars as the hacker Acid Burn in the film Hackers, proving once and for all that women can play the part of the technologically competent in films.

1995 Ming Lin co-invents algorithms for tracking moving objects and detecting collisions based on the idea of bounding them with boxes. They are used widely in games and computer-aided design software.

2004 A new version of The Stepford Wives is released starring Nicole Kidman. It flops at the box office and is panned by reviewers. Finally! Let’s hope they don’t attempt to remake this movie again.

2005 The president of Harvard University, Lawrence Summers, says that women have less “innate” or “natural” ability than men in science. This ridiculous remark causes uproar and Summers leaves his position in the wake of a no-confidence vote from Harvard faculty.

2006 Fran Allen is the first woman to win the Turing Award, which is considered the Nobel Prize of computer science, for work dating back to the 1950s. Allen says that she hopes that her award gives more “opportunities for women in science, computing and engineering”.

2006 Torchwood’s technical expert Toshiko Sato (Torchwood is the organisation protecting the Earth from alien invasion in the BBC’s cult TV series) is not only a woman but also a quiet, highly intelligent computer genius. Fiction catches up with reality at last.

2006 Jeannette Wing promotes the idea of computational thinking as the key problem solving skill set of computer scientists. It is now taught in schools across the world.

2008 Barbara Liskov wins the Turing Award for her work in the design of programming languages and object-oriented programming. This happens 40 years after she becomes the first woman in the US to be awarded a PhD in computer science.

2009 Wendy Hall is made a Dame Commander of the Order of the British Empire for her pioneering work on hypermedia and web science.

2011 Kimberly Bryant, an electrical engineer and computer scientist, founds Black Girls Code to encourage and support more African-American girls to learn to code. Thousands of girls have been trained.

2012 Shafi Goldwasser wins the Turing Award. She co-invented zero knowledge proofs: a way to show that a claim being made is true without giving away any more information. This is important in cryptography to ensure people are honest without giving up privacy.

2015 Sameena Shah’s AI-driven fake news detection and verification system goes live, giving Reuters an advantage of several years over competitors.

2016 Hidden Figures, the film about Katherine Johnson, Dorothy Vaughan and Mary Jackson, the female African-American mathematicians and programmers who worked for NASA supporting the space programme, is released.

2018 Gladys West is inducted into the US Air Force Hall of Fame for her central role in the development of satellite remote sensing and GPS. Her work directly helps us all.

2025 Ursula Martin is made a Dame Commander of the Order of the British Empire for services to Computer Science. She was the first female Professor of Computer Science in the UK, focussing on theoretical computer science, formal methods and, later, maths as a social enterprise. She was also the first true expert to examine the papers of Ada Lovelace.

It is of course important to remember that men occasionally helped too! The best computer science and innovation arise when the best people of whatever gender, culture, sexuality, ethnicity and background, disabled or otherwise, work together.

Paul Curzon, Queen Mary University of London



EPSRC supports this blog through research grant EP/W033615/1. 

Operational Transformation

Algorithms for writing together

How do online word processing programs manage to allow two or more people to change the same document at the same time without getting in a complete muddle? One of the key ideas that makes collaborative writing possible was developed by computer scientists Clarence Ellis and Simon Gibbs. They called their idea ‘operational transformation’.

Let’s look at a simple example to illustrate the problem. Suppose Alice and Bob share a document that starts:

"MEETING AT 10AM"

First of all, one computer, called the ‘server’, holds the actual ‘master’ document. If the network goes down or computers crash then it’s that ‘master’ copy that everyone sees as the definitive version.

Both Alice and Bob’s computers can connect to that server and get copies to view on their own machines. They can both read the document without problem – they both see the same thing. But what happens if they both start to change it at once? That’s when things can get mixed up.

Let’s suppose Alice notices that the time in the document should be PM not AM. She puts her cursor at position 14 and replaces the letter there with P. As far as the copy she is looking at is concerned, that is where the faulty A is. Her computer sends a command to the server to change the master version accordingly, saying:

CHANGE the character at POSITION 14 to P.

The new version at some point later will be sent to everyone viewing. However, suppose that at the same time as Alice was making her change, Bob notices that the meeting is at 1 not 10. He moves his cursor to position 13, so over the 0 in the version he is looking at, and deletes it. A command is sent to the server computer:

DELETE the character at POSITION 13.

Now, if the server receives the instructions in that order, then all is fine. The document ends up as both Bob and Alice intended. When they are sent the updated version it will have made both their changes correctly:

"MEETING AT 1PM"

However, as both Bob and Alice are editing at the same time, their commands could arrive at the server in either order. If the delete command arrives first then the document ends up in a muddle, as first the 13th position is deleted, giving:

"MEETING AT 1AM"

Then, when Alice’s command is processed, the 14th character is changed to a P as it asks. Unfortunately, the 14th character is now the M because the deleted character has gone. We end up with:

"MEETING AT 1AP"

Somehow the program has to avoid this happening. That is where the operational transformation algorithm comes in. It changes each instruction, as needed, to take other delete or insert instructions into account. Before the server follows the instructions, they are transformed so that they give the right result whatever order they arrived in.

So, in the above example, if the delete is done first, then any other instruction that arrives and applies to the same initial version of the document is changed to take account of how the positions have shifted due to the already-applied deletion. We would therefore end up applying the instructions:

STARTING FROM "MEETING AT 10AM"
DELETE the character at POSITION 13.
CHANGE the character at POSITION (14-1) to P.
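Here is a minimal sketch of that transformation step in Python (note that Python counts positions from 0, while the story counts from 1, so the ‘0’ is at index 12 and the faulty ‘A’ at index 13). It handles only the two kinds of instruction in the example; real algorithms, like Ellis and Gibbs’, must handle inserts and many trickier cases too:

```python
# A minimal operational transformation sketch: rewrite an incoming
# instruction to account for a delete that has already been applied.

def transform(op, applied):
    """Shift op's position if an earlier character was already deleted."""
    if applied["kind"] == "delete" and op["pos"] > applied["pos"]:
        return {**op, "pos": op["pos"] - 1}
    return op

def apply_op(doc, op):
    if op["kind"] == "delete":
        return doc[:op["pos"]] + doc[op["pos"] + 1:]
    else:  # "change"
        return doc[:op["pos"]] + op["char"] + doc[op["pos"] + 1:]

doc = "MEETING AT 10AM"
bob = {"kind": "delete", "pos": 12}                 # remove the '0'
alice = {"kind": "change", "pos": 13, "char": "P"}  # 'A' -> 'P'

doc = apply_op(doc, bob)                    # Bob's command arrives first...
doc = apply_op(doc, transform(alice, bob))  # ...so Alice's is transformed
print(doc)  # MEETING AT 1PM
```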

Without operational transformation, two people trying to write a document together would just be frustrating chaos. Online editing would have to be done the old way: taking it in turns, or one person making suggestions for the other to carry out. With the algorithm, thanks to Clarence Ellis and Simon Gibbs, people anywhere in the world can work on one document together. Group writing has changed forever.

Paul Curzon, Queen Mary University of London


This article was originally published on the CS4FN website.



EPSRC supports this blog through research grant EP/W033615/1.

The original version of this article was funded by the Institute of Coding.

Engineering a cloak of invisibility: manipulating light with metamaterials

by Akram Alomainy and Paul Curzon, QMUL

You pull a cloak around you and disappear! Reality or science fiction? Harry Potter’s invisibility cloak is surely Hogwarts’ magic that science can’t match. Even in Harry Potter’s world it takes powerful magic and complicated spells to make it work. Turns out even that kind of magic can be done with a combination of materials science and computer science. Professor Susumu Tachi of the University of Tokyo has developed a cloak made of thousands of tiny beads. Cameras video what is behind you and a computer system then projects the appropriate image onto the front of the cloak. The beads are made of a special material called retro-reflectrum. It is vital to give the image a natural feel – normal screens give too flat a look, losing the impression of seeing through the person. Now you see me, now you don’t at the flick of a switch.

But could an invisibility cloak, without tiny screens on it, ever be a reality? It sounds impossible especially if you understand how light behaves. It bounces off the things around us, travelling in straight lines. You see them when that reflected light eventually reaches your eyes. I can see the red toy over there because red light bounced from it to me. For it to be invisible, no light from it must reach my eyes, while at the same time light from everything else around should. How could that be possible? Akram Alomainy of Queen Mary, University of London, tells us more.

Well, maybe things aren’t quite that simple… Halls of mirrors, rainbows, polar bears and desert mirages all suggest some odd things can happen with light! They show that manipulating light is possible and that we may even be able to bend it in a way that alters the way things look – even humans.

Light fantastic

Have you ever wondered how the hall of mirrors in a fun fair distorts your reflection? Some make us look short and fat while others make us tall and slim! It’s all about controlling the behaviour of light. The light rays still travel in straight lines, but the mirrors deceive the eye. The light seems to arrive from a different place to reality because the mirrors are curved, not flat, making the light bounce at odd angles.

A rainbow is an object we see that isn’t really there. Rainbows occur because white light doesn’t actually exist: it is just coloured light all mixed up. The colour of an object you see depends on which colours pass through or get reflected, and which get absorbed. The light is white when it hits the raindrops, but comes back out separated into the whole spectrum of colours. The colours head off at slightly different angles, which is why they appear in the different rainbow positions.

What about polar bears? Did you know that they have black skins and semi-transparent hair? You see them as white because of the way the hollow hairs reflect sunlight.

So what does this have to do with invisibility? Well, it suggests that with light all is not as it seems. Perhaps we can manipulate it to do anything we want.

Water! Water!

Now for the clincher – mirages! They show that invisibility cloaks ought to be a possibility. Light from the sun travels in a straight line through the sky. That means we see everything as it is. Except not quite. In places like deserts where the temperature is very high at noon, apparently weird things happen to the light. The difference between the temperature, and thus the difference in density between the higher air layers and the levels closer to the ground can be quite large. That temperature difference makes light coming from the sky change direction as it passes through each layer. It bends rather than just travelling in a straight line to us. It is that image of the sky that looks like the pool of water – the mirage. Our brains assume the light travelled in a straight line, so they misinterpret its location. Now, to make something invisible we just need to make light bend round it. That invisibility cloak is a possibility if we can just engineer what mirages do – bend light!
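The layer-by-layer bending can be described with Snell’s law, n1 sin(θ1) = n2 sin(θ2). Here is a small Python sketch of a ray heading down through ever-hotter, ever less dense air; the refractive index differences are hugely exaggerated, purely illustrative numbers, not measurements:

```python
import math

# Follow a ray of light down through horizontal layers of air using
# Snell's law: n1 * sin(t1) = n2 * sin(t2), angles measured from the
# vertical. Hot air near the ground is less dense, so it has a lower
# refractive index. Index values here are exaggerated for illustration.

layers = [1.0003, 1.0002, 1.0001, 1.0000]  # cool air (top) -> hot air (ground)
angle = math.radians(89.0)  # heading down, but only just: nearly horizontal

for n1, n2 in zip(layers, layers[1:]):
    s = n1 * math.sin(angle) / n2
    if s > 1.0:
        # Snell's law has no solution: the ray reflects back upwards,
        # carrying an image of the sky -- the 'pool of water' mirage.
        print("The ray bends back up towards the sky: a mirage!")
        break
    angle = math.asin(s)
    print("Now", round(math.degrees(angle), 3), "degrees from vertical")
```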

Nano-machines

That is the basic idea and it is an area of science called ‘transformation optics’ that makes it possible. The science tells us about the properties that each point of an object must have to make light waves travel in any particular way we wish through it. To make it happen engineers must then create special materials with those properties. These materials are known as metamaterials. Their properties are controlled using electromagnetism, which is where the electronic engineers come in! You can think of them as being made of vast numbers of tiny electrical machines built into big human-scale structures. Each tiny machine is able to control how light passes through it, even bending light in a way no natural material could. If the machines are small enough – ‘nanotechnology’ as small as the wavelength of light – and their properties can be controlled really precisely to match the science’s prediction, then we can make light passing through them do anything we want. For invisibility, the aim is to control those properties so the light bends as it passes through a metamaterial cloak. If the light comes out the other side of the cloak unchanged and travelling in the same direction as it entered, while avoiding objects in the middle, then those objects will be invisible.

Now you see it…

Simple cloaking devices that work this way have already been created but they are still very limited. One of the major challenges is the range of light they can work with. At the moment it’s possible to make a cloak that bends a single colour frequency, but not all light. As Yang Hao, a professor working in this area at Queen Mary, notes: “The obstacle engineers face is the complex manufacturing techniques needed to build devices that can bend light across the whole visible light spectrum. However, with the progress being made in nanotechnologies this could become a possibility in the near future”.

Perhaps we should leave the last word to J.K. Rowling: “A suspicious object like that, it was clearly full of Dark Magic.” So while we should appreciate the significance of such an invention we should perhaps be careful about the negative consequences!




EPSRC supports this blog through research grant EP/W033615/1.



Alexander Graham Bell: It’s good to talk

An antique phone

Image: modified version of one by Christine Sponchia from Pixabay

by Peter W McOwan, Queen Mary University of London

(From the archive)

The famous inventor of the telephone, Alexander Graham Bell, was born in 1847 in Edinburgh, Scotland. His story is a fascinating one, showing that like all great inventions, a combination of talent, timing, drive and a few fortunate mistakes are what’s needed to develop a technology that can change the world.

A talented Scot

As a child the young Alexander Graham Bell, Aleck, as he was known to his family, showed remarkable talents. He had the ability to look at the world in a different way, and come up with creative solutions to problems. Aged 14, Bell designed a device to remove the husks from wheat by combining a nailbrush and paddle into a rotary-brushing wheel.

Family talk

The Bell family had a talent with voices. His grandfather had made a name for himself as a notable, but often unemployed, actor. Aleck’s mother was deaf, but rather than use her ear trumpet to talk to her like everyone else did, the young Alexander came up with the cunning idea that speaking to her in low, booming tones very close to her forehead would allow her to hear his voice through its vibrations. This special bond with his mother gave him a lifelong interest in the education of deaf people, which, combined with his inventive genius and some odd twists of fate, was to change the world.

A visit to London, and a talking dog

While visiting London with his father, Aleck was fascinated by a demonstration of Sir Charles Wheatstone’s “speaking machine”, a mechanical contraption that made human-like noises. On returning to Edinburgh, their father challenged Aleck and his older brother to come up with a machine of their own. After some hard work, and scrounging bits from around the place, they built a machine with a mouth, throat, nose, movable tongue, and bellows for lungs – and it worked. It made human-like sounds. Delighted by his success, Aleck went a step further and massaged the mouth of his Skye terrier so that the dog’s growls were heard as words. Pretty wruff on the poor dog.

Speaking of teaching

By the time he was 16, Bell was teaching music and elocution at a boys’ boarding school. He was still fascinated by trying to help those with speech problems improve their quality of life, and was very successful at this, later publishing two well-respected books called ‘The Practical Elocutionist’ and ‘Stammering and Other Impediments of Speech’. Alexander and his brother toured the country giving demonstrations of their techniques to improve people’s speech. He also started his studies at the University of London, where a mistake in reading German was to change his life and lay the foundations for the telecommunications revolution.

A ‘silly’ language mistake that changed the world

At University, Bell became fascinated by the ideas of German physicist Hermann von Helmholtz. Von Helmholtz had produced a book, ‘On The Sensations of Tone’, in which he said that vowel sounds, a, e, i, o and u, could be produced using electrical tuning forks and resonators. However, Bell couldn’t read German very well, and mistakenly believed that von Helmholtz had written that vowel sounds could be transmitted over a wire. This misunderstanding changed history. As Bell later stated, “It gave me confidence. If I had been able to read German, I might never have begun my experiments in electricity.”

Tragedy and Travel

Things were going well for young Bell’s career when tragedy struck. He and both his brothers contracted tuberculosis, a common disease at the time. His two brothers died, and at the age of 23, still suffering from the disease, Bell left Britain, moving to Ontario in Canada to convalesce and then to Boston to work in a school for deaf mutes.

The time for more than dots and dashes

His dreams of transmitting voices over a wire were still spinning round in his creative head. It just needed some new ideas to spark him off again. Samuel Morse had developed Morse code and the electric telegraph, which allowed single messages in the form of long and short electronic pulses, dots and dashes, to be transmitted rapidly along a wire over huge distances. Bell saw the similarities between sending multiple messages over a wire and the multiple notes in a musical chord: a “harmonic telegraph” could be a way to send voices.

Chance encounter

Again chance played its role in telecommunications history. At the electrical machine shop of Charles Williams, Bell ran into young Thomas Watson, a skilled electrical machinist able to build the devices that Bell was devising. The two teamed up and started to work toward making Bell’s dream a reality. To make it work they needed to invent two things: something to measure a voice at one end, and another device to reproduce the voice at the other – what we would today call the microphone and the speaker.

The speaker accident

June 2, 1875 was a landmark day for team Bell and Watson. Working in their laboratory they were trying to free a reed, a small flat piece of metal, which they had wound too tightly to the pole of an electromagnet. In trying to free it Watson produced a ‘twang’. Bell heard the twang and came running. It was a sound similar to the sounds in human speech: this was the solution to producing an electronic voice, a discovery that must have come as a relief for all the dogs in the Boston area.

The mercury microphone

Bell had also discovered that a wire vibrated by his voice while partially dipped in a conducting liquid, like mercury or battery acid, could be made to produce a changing electrical current. They now had a device where the voice could be transformed into an electronic signal. All that was needed was to put the two inventions together.

The first ’emergency’ phone call (allegedly)

On March 10, 1876, Bell and Watson set out to test their new system. The story goes that Bell knocked over a container with battery acid, which they were using as the conducting liquid in the ‘microphone’. Spilled acid tends to be nasty and Bell shouted out “Mr. Watson, come here. I want you!” Watson, working in the next room, heard Bell’s cry for help through the wire. The first phone call had been made, and Watson quickly went through to answer it. The telephone was invented, and Bell was only 29 years old.

The world listens

The telephone was finally introduced to the world at the Centennial Exhibition in Philadelphia in 1876. Bell quoted Hamlet over the phone line from the main building 100 yards away, causing the surprised Brazilian Emperor Dom Pedro to exclaim, “My God, it talks”, and talk it did. From there on, the rest, as they say, is history. The telephone spread throughout the world, changing the way people lived their lives, though not without social problems. In many upper-class homes it was considered vulgar. Many people considered it intrusive (just like some people’s view of mobile phones today!), but eventually it became indispensable.

Can’t keep a good idea down

Inventor Elisha Gray also independently designed his own version of the telephone. In fact both he and Bell rushed their designs to the US patent office within hours of each other, but Alexander Graham Bell patented his telephone first. With the massive amounts of money to be made, Elisha Gray and Alexander Graham Bell entered into a famous legal battle over who had invented the telephone first, and Bell had to fight many legal battles over his lifetime as others claimed they had invented the technology first. Bell won all the legal cases, partly, many claimed, because he was such a good communicator and had such a convincing talking voice. As is often the way, few people now remember the other inventors. In fact, it is now recognised that the Italian Antonio Meucci had invented a method of electronic voice communication earlier, though he did not have the funds to patent it.

Fame and Fortune under Forty

Bell became rich and famous, and he was only in his mid-thirties. The Bell Telephone Company was set up, and later went on to become AT&T, one of America’s foremost telecommunications giants.

Read Terry Pratchett’s brilliant book ‘Going Postal’ for a fun fantasy about inventing and making money from communication technology on Discworld.


EPSRC supports this blog through research grant EP/W033615/1. 

Manufacturing Magic

by Howard Williams, Queen Mary University of London (From the archive)

Can computers lend a creative hand to the production of new magic tricks? That’s a question our team, led by Peter McOwan at Queen Mary, wrestled with.

The idea that computers can help with creative endeavours like music and drawing is nothing new – turn the radio on and the song you are listening to will have been produced with the help of a computer somewhere along the way, whether it’s a synthesiser sound, or the editing of the arrangement, and some music is created purely inside software. Researchers have been toiling away for years, trying to build computer systems that actually write the music too! Some of the compositions produced in this way are surprisingly good! Inspired by this work, we decided to explore whether computers could create magic.

The project to build creative software to help produce new magic tricks started with a magical jigsaw that could be rearranged in certain ways to make objects on its surface disappear. Pretty cool, but what part did the computer play? A jigsaw is made up of different pieces, each with four sides – the number of different ways all these pieces can be put together is very large; for a human to sit down and try out all the different configurations would take many hours (perhaps thousands, if not millions!). Whizzing through lots of different combinations is something a computer is very good at. When there are simply too many different combinations for even a computer to try out exhaustively, programmers have to take a different approach.

Evolve a jigsaw

A genetic algorithm is a program that mimics the biological process of natural selection. We used one to intelligently search through all the interesting combinations that the jigsaw might be made up from. A population of jigsaws is created, and is then ‘evolved’ via a process that evaluates how good each combination is in each generation, gradually weeding out the combinations that wouldn’t make good jigsaws. At the end of the process you hope to be left with a winner: a jigsaw that matches all the criteria that you are hoping for. In this particular case, we hoped to find a jigsaw that could be built in two different ways, each with a different number of the same object in the picture, so that you could appear to make an object disappear and reappear again as you made and remade it. The idea is based on a very old trick popularised by Sam Loyd, but our aim was to create a new version that a human couldn’t, realistically, have come up with without a lot of free time on their hands!
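To see the mechanism in miniature, here is a bare-bones genetic algorithm in Python. Everything about it is a toy stand-in: the ‘genome’ here is a bit-string and the fitness function just counts 1s, whereas in the jigsaw project the genome encoded piece arrangements and fitness scored how magical a configuration would seem:

```python
import random

# A bare-bones genetic algorithm. Toy stand-ins throughout: the 'genome'
# is a bit-string and fitness just counts 1s. In the jigsaw search the
# genome encoded piece arrangements and fitness scored how well a
# configuration met the criteria for a good trick.

GENES, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(genome):
    return sum(genome)  # stand-in for "how good is this jigsaw?"

def crossover(a, b):
    cut = random.randrange(1, GENES)  # splice two parents together
    return a[:cut] + b[cut:]

def mutate(genome):
    return [g ^ 1 if random.random() < MUTATION else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # weed out the weaker half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("Best fitness found:", fitness(max(population, key=fitness)))
```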

To understand what role the computer played, we need to explore the Genetic Algorithm mechanism it used to find the best combinations. How did the computer know which combinations were good or bad? This is something creative humans are great at – generating ideas, and discarding the ones they don’t like in favour of ones they do. This creative process gradually leads to new works of art, be they music, painting, or magic tricks. We tackled this problem by first running some experiments with real people to find out what kind of things would make the jigsaw seem more ‘magical’ to a spectator. We also did experiments to find out what would influence a magician performing the trick. This information was then fed into the algorithm that searched for good jigsaw combinations, giving the computer a mechanism for evaluating the jigsaws, similar to the ones a human might use when trying to design a similar trick.

More tricks

We went on to use these computational techniques to create other new tricks, including a card trick, a mind reading trick on a mobile phone, and a trick that relies on images and words to predict a spectator’s thought processes. You can find out more, including downloading the jigsaw, at www.Qmagicworld.wordpress.com

Is it creative, though?

There is a lot of debate about whether this kind of ‘artificial intelligence’ software is really creative in the way humans are, or in fact creative in any way at all. After all, how would the computer know what to look out for if the researchers hadn’t configured the algorithms in specific ways? Does a computer even understand the outputs that it creates? The fact is that these systems do produce novel things, though – new music, new magic tricks – and sometimes in surprising and pleasing ways, previously not thought of.

Are they creative (and even intelligent)? Or are they just automatons bound by the imaginations of their creators? What do you think?



EPSRC supports this blog through research grant EP/W033615/1. 

Solving problems you care about

by Patricia Charlton and Stefan Poslad, Queen Mary University of London

The best technology helps people solve real problems. To be a creative innovator you need not only to be able to create a solution that works but also to spot a need in the first place and be able to come up with creative solutions. Over the summer a group of sixth formers on internships at Queen Mary had a go at doing this. Ultimately their aim was to build something from a programmable gadget such as a BBC micro:bit or Raspberry Pi. They therefore had to learn about the different possible gadgets they could use, how to program them and how to control the on-board sensors available. They were then given the design challenge of creating a device to solve a community problem.

Hearing the bus is here

Tai Kirby wanted to help visually impaired people. He knew that it’s hard for someone with poor sight to tell when a bus is arriving. In busy cities like London this problem is even worse, as buses for different destinations often arrive at once. His solution was a prototype that announces when a specific bus is arriving, letting the person know which bus is which. He wrote it in Python, using a Raspberry Pi linked to low-energy Bluetooth devices.

The fun spell

Filsan Hassan decided to find a fun way to help young kids learn to spell. She created a gadget that associated different sounds with different letters of the alphabet, turning spelling words into a fun, musical experience. It needed two micro:bits and a screen communicating with each other using a radio link. One micro:bit controlled the screen while the other ran the main program that allowed children to choose a word, play a linked game and spell the word using a scrolling alphabet program she created. A big problem was how to make sure the combination of gadgets had a stable power supply. This needed a special circuit to get enough power to the screen without frying the micro:bit and sadly we lost some micro:bits along the way: all part of the fun!

Remote robot

Jesus Esquivel Roman developed a small remote-controlled robot using a buggy kit. There are lots of applications for this kind of thing, from games to mine-clearing robots. The big challenge he had to overcome was how to do the navigation using a compass sensor. The problem was that the batteries and motor interfered with the calibration of the compass. He also designed a mechanism that used the accelerometer of a second micro:bit allowing the vehicle to be controlled by tilting the remote control.

Memory for patterns

Finally, Venet Kukran was interested in helping people improve their memory and thinking skills. He invented a pattern memory game using a BBC micro:bit and implemented it in MicroPython. The game generates patterns that the player has to match and then replicate to score points. The program generates new patterns each time so every game is different. The more you play, the more complex the patterns you have to remember become.
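Venet’s actual code isn’t reproduced here, but a stripped-down sketch of the idea in micro:bit MicroPython might look like the following; the arrows, timings and two-button patterns are our own illustrative choices:

```python
# A stripped-down pattern-memory game for a BBC micro:bit, written in
# MicroPython. Illustrative sketch only, not Venet's actual program.
from microbit import display, button_a, button_b, Image, sleep
import random

ARROWS = {"A": Image.ARROW_W, "B": Image.ARROW_E}
pattern = []

while True:
    pattern.append(random.choice("AB"))  # the pattern grows every round
    for step in pattern:                 # show the pattern to memorise
        display.show(ARROWS[step])
        sleep(600)
        display.clear()
        sleep(200)
    button_a.was_pressed()               # throw away any stale presses
    button_b.was_pressed()
    for step in pattern:                 # read the player's answer back
        guess = None
        while guess is None:
            if button_a.was_pressed():
                guess = "A"
            elif button_b.was_pressed():
                guess = "B"
            sleep(10)
        if guess != step:
            display.show(Image.SAD)      # wrong: game over, start again
            sleep(1000)
            pattern = []
            break
    else:
        display.show(Image.HAPPY)        # whole pattern right: next round
        sleep(500)
    display.clear()
```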

As they found, you have to be very creative to be an innovator: both to come up with real issues that need a solution, and to overcome the problems you are bound to encounter along the way.




EPSRC supports this blog through research grant EP/W033615/1.

Sameena Shah: News you can trust

Having reliable news always matters to us: when disasters strike, when we want to know for sure what our politicians really said, or just when keeping up with what our favourite celebrity is really up to. Nowadays social networks like Twitter and Facebook are a place to find breaking news, though telling fact from fake news is getting ever harder. How do you know where to look, and when you find something, how do you know that juicy story isn’t just made up?

One way to be sure of stories is to rely on trusted news providers, like the BBC, but how do they make sure their stories are real? A lot of fake news is created by Artificial Intelligence bots, and Artificial Intelligence is part of the solution to beat them.

Sameena Shah realised this early on. An expert in Artificial Intelligence, she led a research team at news provider Thomson Reuters, which provides trusted information for news organisations worldwide. To help ensure we all have fast, reliable news, Sameena’s team created an Artificial Intelligence program to automatically discover news from the mass of social networking information that is constantly being generated. It combines programs that process and understand language to work out the meaning of people’s posts – ‘natural language processing’ – with machine learning programs that look for patterns in all the data, to work out what is really news and, most importantly, what is fake. She both thought up the idea for the system and led the development team. Because it could automatically detect fake news at a time when news organisations were struggling with how much was being generated, it gave Thomson Reuters a head start of several years over other trusted news companies.
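Reuters’ real system is far more sophisticated, but the core machine-learning step – learning patterns that separate one class of text from another – can be sketched in a few lines of Python using scikit-learn. The tiny ‘training set’ below is invented purely for illustration:

```python
# A toy text classifier in the spirit of fake-news detection: turn
# posts into word-based features, then learn patterns separating the
# two classes. (Tiny invented dataset, for illustration only.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Eyewitnesses report flooding in the city centre",
    "Officials confirm the road closure after the storm",
    "SHOCKING!!! Celebrity secret the media won't tell you",
    "Miracle cure doctors hate: share before it's deleted",
]
labels = ["news", "news", "fake", "fake"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["Reporters confirm flooding near the station"]))
```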

Sameena’s ideas, and her work putting them into practice, have helped make sure we all know what’s really happening.

Paul Curzon, Queen Mary University of London (updated from the archive)



EPSRC supports this blog through research grant EP/W033615/1.