Balls, beams and quantum computers – performing calculations with patterns of light

Photo credit: Galton Box by Klaus-Dieter Keller, Public Domain, via Wikimedia Commons, via the Wikipedia page for the Galton board

Have you played the seaside arcade game where shiny metal balls drop down to ping, ping off little metal pegs and settle in one of a series of channels? After you have fired lots of balls, did you notice a pattern as the silver spheres collect in the channels? A smooth glistening dome of tiny balls builds up: a bell curve. High scores are harder to get than lower ones. Francis Galton pops up again*, but this time as a fellow Victorian trendsetter for future computer design.

Francis Galton invented this special combination of row after row of offset pins and narrow receiving channels to demonstrate a statistical idea: the normal distribution, or bell curve. Balls are more likely to bounce their way to the centre, their numbers falling away in an elegant sweep towards the left and right edges of the board. But instead of ball bearings, Galton used beans, so it was called the bean machine. The point here, though, is that the machine does a computation – it computes the bell curve.
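If you'd like to see the bean machine compute, here is a minimal simulation sketch in Python (the numbers of rows and balls are arbitrary choices for illustration): each ball bounces left or right at every row of pins, and the channel counts trace out the bell curve.

```python
import random
from collections import Counter

def galton(rows=10, balls=10000):
    """Simulate a Galton board: each ball bounces left or right at
    every row of pins and lands in one of rows+1 channels."""
    counts = Counter()
    for _ in range(balls):
        # The channel a ball lands in equals its number of rightward bounces.
        channel = sum(random.random() < 0.5 for _ in range(rows))
        counts[channel] += 1
    return counts

counts = galton()
# The middle channels collect far more balls than the edges, tracing out
# the bell curve of the binomial (approximately normal) distribution.
for channel in range(11):
    print(channel, '#' * (counts[channel] // 100))
```

Run it a few times: the exact counts change, but the dome shape always comes back, just as it does in the arcade.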

Skip forward 100 years and ‘Boson Samplers’, based on Galton’s bean machine, are being used to drive forward the next big thing in computer design, quantum computers.

Instead of beans or silver balls, computer scientists fire photons, particles of light, through minuscule channels on optical chips. These tiny bundles of energy bounce and interfere to create a unique pattern: a distribution, though one that a normal digital computer would find hard to calculate. By setting the chip up in different ways, the patterns that result can correspond to different computations. It is computing answers to different calculations set for it.

Through developing these specialised quantum circuits scientists are bouncing beams of light forwards on the path that will hopefully lead to conventional digital technology being replaced with the next generation of supercomputers.

Jane Waite, Queen Mary University of London


*Francis Galton appears earlier in Issue 20; you can read more about him on page 15 of the PDF. Although a brilliant mathematician, he held views about people that are unacceptable today. In 2020 University College London (UCL) changed the name of its Galton Lecture Theatre, previously named in his honour, to Lecture Theatre 115.

EPSRC supports this blog through research grant EP/W033615/1.

Getting off the beach, fast

by Paul Curzon, Queen Mary University of London

Paul goes on holiday and sees how a car park can work like a computer.

Computers get faster and faster every year. How come? Because computer scientists and electronic engineers keep thinking up new tricks, completely new ways to make them go faster. One way has been to shrink the components so signals don’t have as far to go. Another is to use the same trick they were using in a beach car park I came across on holiday.

Woolacombe Sands in Devon is one of the most popular beaches around. There is a great expanse of beautiful sand, as well as rocks for kids to climb on and good surfing too. The weather is even good there – well, most of the time. The car park, right on the edge of the beach, fills in the morning. Since most people arrive early and stay all day, it’s a standard price of £5.50 for the day. Entry and exit barriers control the numbers. The entry barrier only allows a car in if there is a space, and the exit barrier only lets people out when they have paid.

That’s where there is a problem though. The vast majority of people leave around 5pm as the ice cream vans pack up and it’s time to look for dinner. The machine only takes coins, and you insert the money from your car at the barrier. Each driver has to fumble with 5 one-pound coins and a 50p, and that takes time. Once the current car moves on out, there is another delay as the driver behind pulls forward to get into position to put their money in. Without some thought it would lead to long queues. Not only that, it wouldn’t be very green. Cars are at their worst, pumping out pollution, when stuck in a jam.

The last thing you want to do to a family who’ve had a great day on your beach is then irritate them by clogging them up in a traffic jam when they try to leave. So what do you do? How can you speed things up (and make sure you aren’t just moving the queue to the morning or to some other ticket machine somewhere else)?

The problem is similar to one in designing a computer chip. Think of the cars as data waiting to be processed (perhaps as part of a calculation) and the barrier as a processing unit where some manipulation of that data is needed. Data waiting to be processed has to be fetched before it can be used, just as the cars have to move up to the barrier before the driver can pay. The fact that the problems are so similar suggests that a solution to one may also be a solution to the other.

Speed it up

There are lots of ways you could change the system to improve the speed at which cars are processed in the car park. The speed at which data passes through a system is called its ‘throughput’. Woolacombe have thought of a simple way to improve their throughput. They put a person with a bag of change next to the barrier to help the drivers. This allows them to keep the relatively simple barrier system they have. It also has advantages in keeping the money in one place and being a foolproof way of ensuring there is a space for everyone who enters. It still maintains all the safeguards of the ticket barrier though. How can that one person speed things up?

What would you do?

So what would YOU do if you were that person? Would you speed things up? Or would you just stand there powerless watching the misery of all those families?

The first thing you could do is to stand by the machine, take the change off the driver and insert it yourself. That will speed things up a little, because it takes drivers longer to put the money in themselves as they have to stretch out of the car window. Also, if the driver only has a five-pound note you can take it and just insert coins from your change bag, rather than wasting time passing change back to the driver to then insert. Similarly, if the driver only has 50 pence pieces, say, rather than wasting time inserting 10 of them you can take them and insert 5 one-pound coins instead.

You’ve done some good, removing the problem of slow drivers fumbling with coins, but you haven’t really solved the worst problems. Cars aren’t moving at all while you are inserting the 6 coins, and after each car moves through the barrier you are doing nothing but waiting for the next car to pull forward. In an ideal system, with the best throughput, the cars barely stop at all and you are constantly busy.

A Pipeline of Cars

It turns out you can do something about that. It’s called pipelining. There is a way you can be busy dealing with the next car even before it’s got to you. You just have to get ahead of yourself!

How? Before the first car arrives, insert five one-pound coins into the machine and wait. As the driver gets to you and hands over the money, insert their 50p, keeping the rest. The barrier opens immediately, and the driver barely has to stop. Better still, you are now holding five one-pound coins that you can insert as the next car arrives, leaving you back in an identical situation. That means the next car can drive straight through too, and you are constantly busy as long as cars keep arriving.

Speedy data

So you’ve helped the families leaving the beach, but how might a similar trick speed up a computer? Well you can do a similar thing in the way you get a computer processor to execute the instructions from a program. Suppose your program requires the processor to get some numbers from storage, process them (perhaps multiplying the numbers together) and then store the result somewhere else for later use. Typically a program might do that over and over again, varying where the data comes from and how it is processed.

Early computers would do each instruction in turn – doing the fetching, processing and storing of one instruction before starting the next. But that is just like a car in our car park coming to the barrier, being processed and leaving before the next one moves. Can we pull off the same trick to speed things up? Well, yes of course.

All you need to do is overlap the separate parts. Just as at any time in the car park a car will be driving out, a second will be handing over money and a third pulling forward, the same can happen in the computer. As the first instruction’s result is being stored, the next instruction can already be being processed and the data from the one after that can be fetched from memory. Just by reorganising the way the work is done, we have roughly tripled the speed of our computer as now three things are happening at once.
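The arithmetic behind that speed-up can be written down in a few lines of Python – a simplified model that assumes one clock cycle per stage and no stalls:

```python
def sequential_cycles(n_instructions, stages=3):
    # Each instruction completes all its stages (fetch, process, store)
    # before the next one starts -- like each car paying alone at the barrier.
    return n_instructions * stages

def pipelined_cycles(n_instructions, stages=3):
    # Stages overlap: once the pipeline has filled (stages - 1 cycles),
    # one instruction completes every single cycle.
    return (stages - 1) + n_instructions

n = 100
print(sequential_cycles(n))  # 300 cycles without pipelining
print(pipelined_cycles(n))   # 102 cycles with pipelining
```

For 100 instructions the pipeline takes 102 cycles instead of 300 – close to the threefold speed-up described above, and the longer the stream of instructions, the closer it gets.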

What we have done is set up a ‘pipeline’ – with a series of instructions all flowing through it, being executed, at the same time. Woolacombe has a pipeline of cars, but in a computer we pipeline data. Either way things get done faster and people are happier.

Computer science happens in some unexpected places – even at the beach – but then perhaps that isn’t so surprising given computers are made of sand!


This article was originally published on the CS4FN website.


Other beach-themed articles on this blog include the origins of how Paul learned to program while on holiday (“The beach, the missionary and my origin myth”) and messages hidden (steganography) within the stripes of deckchairs (“Encrypted deckchairs”).


“A mob for the Earth”

Online communities and flashmobs supporting the environment and businesses too

One Saturday afternoon one spring in San Francisco, a queue of people stretched down the pavement from a neighbourhood market. There was no shortage of other food shops nearby, so why were hundreds of people waiting to buy everything from crisps to cat litter at this one place? Because that shop had pledged to donate more than a fifth of that day’s profits to improving its environmental footprint.

Pillow fights and parties

The organisation behind the busy shopping day is called Carrotmob. The tactics they used to summon so many people to the tiny market in San Francisco had already been working all over the world for less serious stuff. From a huge pillow fight in New York’s Times Square to a mass disco at Victoria Station in London where people danced along to their MP3 players, the concept of the flashmob can seem to create a party out of thin air. From a simple idea, word can spread over social networking sites, email and word of mouth until a few people have turned into a huge crowd.

Start the bidding

Carrotmob’s founder, Brent Schulkin, wanted to try and entice businesses into going green using a language he thought they’d understand: cash. In return for getting loads of new customers to buy things, the owners had to give back some of their windfall profit to the Earth. To test his idea he went round to food shops in his neighbourhood. He said he could bring lots of extra customers to the shop on a particular day, and asked each of them how much of that day’s profit they’d be willing to spend on making their businesses more environmentally friendly. K&D Market won the bidding war by promising to spend 22% of the profits putting in greener lighting and making their fridges more energy-efficient. Now that K&D had agreed to the deal, Brent had to bring in the punters. He needed a flashmob.

Flashmobs work because it’s now so easy to stay in touch with large numbers of people. If we find out about a cool event we can share it with all our friends just by making one post on sites like Facebook or Twitter. We can make plans to do something as a group just by sending a few texts. When lots of people spread word around like this, suddenly a small idea like Carrotmob, armed with only a website and a few videos, can drop an hour-long queue on the doorstep of a market in San Francisco.

Success!

It’s not easy to enjoy yourself when you’re waiting for an hour to buy a packet of instant noodles, but that’s another advantage of the flashmob: the party atmosphere, the feeling that you’re part of something big. The results were big: the impromptu shoppers brought in more than $9000 – four times what the shop ordinarily rings up on a Saturday afternoon. Lots of the purchases went to a food bank, so even more people shared in the benefits. In the end the shop did well, the Earth did well, and the Carrotmobbers got a party. Plus the good feeling you get from helping the environment probably stays with you longer than the good feeling from getting hit in the face with a pillow.

Paul Curzon, Queen Mary University of London





Happy World Emoji Day – 📅 17 July 2023 – how people use emoji to communicate and what it tells us about them 😀

“Emoji didn’t become so essential because they stand in for words – but because they finally made writing a lot more like talking.”

Gretchen McCulloch (see Further reading below)
Emoji samples © Emojipedia 2025.

The emoji for ‘calendar’ shows the 17th July 📅 (click the ‘calendar’ link to find out why) and, since 2014, Emojipedia (an excellent resource for all things emoji, including their history) has celebrated World Emoji Day on that date.

Before we had emoji (the word emoji can be both singular and plural, but ‘emojis’ is fine too) people added text-based ‘pictures’ to their texts and emails to add flavour to their online conversations, such as 
:-) or :) – for a smiling face 
:-( or :( – for a sad one.

These text-based pictures are known as ‘emoticons’ (icons that add emotion), because it isn’t always possible to know from the words alone what the writer means. They weren’t just used to clarify meaning though: people started to pepper their prose with other playful pictures, such as :p where the ‘p’ is someone blowing a raspberry / sticking their tongue out*, and created other icons such as this rose to send to someone on Valentine’s Day @-'-,->---- or this pole-vaulting amoeba ./

Here are the newly released emoji for 2023.

People use emoji in very different ways depending on their age, gender, ethnicity and personal writing style. In our article “The Emoji Crystal Ball” we look at how people can tell a lot about us from the types of emoji we use and the way we use them.

The Emoji Crystal Ball

Fairground fortune tellers claim to be able to tell a lot about you by staring into a crystal ball. They could tell far more about you (that wasn’t made up) by staring at your public social media profile. Even your use of emojis alone gives away something of who you are. Walid Magdy’s research team … Continue reading

Unicode Poo

The Egyptians had a hieroglyph for it, so unicode has a number for it. There’s even more unicode poo in the emoji character set but the Egyptians got there 1000s of years earlier. Here is how the Ancient Egyptians wrote or carved poo … Continue reading

Further reading


*For an even better raspberry-blowing emoticon try one of the letters (called ‘thorn’) from the Runic alphabet. If you have a Windows computer with a numeric keypad on the right-hand side, press the Num Lock key at the top to lock the number keypad (so that the keys now type numbers and not up and down arrows etc). Hold down the Alt key (there’s usually one on either side of the spacebar) and, while holding it down, type 0254 on the numeric keypad and let go. This should now appear wherever your cursor is: þ. Or for the capital letter it’s Alt+0222 = Þ – for when you want to blow a really big raspberry :Þ

For Mac users press control+command+spacebar to bring up the Character Viewer and just type thorn in the search bar and lots will appear. Double-click to select the one you want, it will automatically paste into wherever your cursor is.
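If you'd rather check the code points than fight with Alt codes, a couple of lines of Python confirm them (the Alt codes above are decimal indexes into the Windows-1252 code page, which agrees with Unicode for these two letters):

```python
# Thorn's two forms, by decimal code, hex code and Unicode name.
print(chr(254))  # þ  (Alt+0254, lower-case thorn, U+00FE)
print(chr(222))  # Þ  (Alt+0222, capital thorn, U+00DE)
print('\N{LATIN SMALL LETTER THORN}')  # þ again, looked up by its Unicode name
```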



Negligent nurses? Or dodgy digital? – device design can unintentionally mask errors

Magicians often fool their audience into ‘looking over there’ (literally or metaphorically), getting them to pay attention to the wrong thing so that they’re not focusing on what the magician is doing and can enjoy the trick without seeing how it was done. Computers, phones and medical devices let you interact with them using a human-friendly interface (such as a ‘graphical user interface’) which makes them easier to use, but which can also hide the underlying computing processes from view. Normally that’s exactly what you want, but if there’s a problem, and one that you’d really need to know about, how well does the device make that clear? Sometimes the design of the device itself can mask important information, and sometimes the way in which devices are used can mask it too. Here is a case where nurses were blamed but it was later found that the medical devices involved, blood glucose meters, had (unintentionally) tripped everyone up. A useful workaround seemed to be working well, but caused problems later on.

Negligent nurses? Or dodgy digital?

by Harold Thimbleby, Swansea University and Paul Curzon, Queen Mary University of London

It’s easy to get excited about new technology and assume it must make things better. It’s rarely that easy. Medical technology is a case in point, as one group of nurses found out. It was all about one simple device and wearable ID bracelets. Nurses were taken to court, blamed for what went wrong.

The nurses taken to court worked in a stroke unit and were charged with wilfully neglecting their patients. Around 70 others were also disciplined though not sent to court.

There were problems with many nurses’ record-keeping. A few were selected to be charged by the police on the rather arbitrary basis that they had more odd records than the others.

Critical Tests

The case came about because of a single complaint. As the hospital, and then police, investigated, they found more and more oddities, with lots of nurses suddenly implicated. They all seemed to have fabricated their records. Repeatedly, their paper records did not tally with the computer logs. Therefore, the nurses must have been making up the patient records.

The gadget at the centre of the story was a portable glucometer. Glucometers allow the blood-glucose (aka blood sugar) levels of patients to be tested. This matters. If blood-sugar problems are not caught quickly, seriously ill patients could die.

Whenever they did a test, the nurses recorded it in the patient’s paper record. The glucometer system also had a better, supposedly infallible, way to do this. The nurse scanned their ID badge using the glucometer, telling it who they were. They then scanned the patient’s barcode bracelet, and took the patient’s blood-sugar reading. They finally wrote down what the glucometer said in the paper records, and the glucometer automatically added the reading to that patient’s electronic record.

Over and over again, the nurses were claiming in the notes of patients that they had taken readings, when the computer logs showed no reading had been taken. As machines don’t lie, the nurses must all be liars. They had just pretended to take these vital tests. It was a clear case of lazy nurses colluding to have an easy life!

What really happened?

In court, witnesses gave evidence. A new story unfolded. The glucometers were not as simple as they seemed. No-one involved actually understood them, how the system really worked, or what had actually happened.

In reality the nurses were looking after their patients … despite the devices.

The real story starts with those barcode bracelets that the patients wore. Sometimes the reader couldn’t read the barcode. You’ve probably seen this happen in supermarkets. Every so often the reader can’t tell what is being scanned. The nurses needed to sort it out as they had lots of ill patients to look after. Luckily, there was a quick and easy solution. They could just scan their own ID twice. The system accepted this ‘double tapping’. The first scan was their correct staff ID. The second scan was of their staff card ID instead of the patient ID. That made the glucometer happy so they could use it, but of course they weren’t using a valid patient ID.

As they wrote the test result in the patient’s paper record no harm was done. When checked, over 200 nurses sometimes used double tapping to take readings. It was a well-known (at least by nurses), and commonly used, work-around for a problem with the barcode system.

The system was also much more complicated than it seemed. It involved a complex computing network and a lot of complex software, not just a glucometer. Records often didn’t make it to the computer database, for a variety of reasons. The network went down, manually entered details contained mistakes, the database sometimes crashed, and the way the glucometers had been programmed meant they had no way to check that the data they sent to the database actually got there. Results didn’t go straight to the patient record anyway: that only happened when the glucometer was docked (for recharging), but the devices were constantly in use so might not be docked for days. Indeed, a fifth of the entries in the database had an error flag indicating something had gone wrong. In reality, you just couldn’t rely on the electronic record. It was the nurses’ old-fashioned paper records that were the ones you could trust.

The police had got it the wrong way round! They thought the computers were reliable and the nurses untrustworthy, but the nurses were doing a good job and the computers were somehow failing to record the patient information. Worse, they were failing to record that they were failing to record things correctly! … So nobody realised.

Disappearing readings

What happened to all the readings with invalid patient IDs? There was no place to file them so the system silently dropped them into a separate electronic bin of unknowns. They could then be manually assigned, but no way had been set up to do that.
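To see how readings can vanish without anyone noticing, here is a hypothetical sketch in Python of that filing logic (this is not the real glucometer software; the IDs and readings are invented for illustration):

```python
def file_reading(database, unknowns, patient_id, reading, valid_ids):
    """File a blood-glucose reading against a patient record --
    or silently bin it if the patient ID isn't recognised."""
    if patient_id in valid_ids:
        database.setdefault(patient_id, []).append(reading)
    else:
        # No error is raised and nobody is told: the reading just
        # never reaches the patient's electronic record.
        unknowns.append((patient_id, reading))

database, unknowns = {}, []
valid_ids = {'P123'}
file_reading(database, unknowns, 'P123', 5.6, valid_ids)     # a normal scan
file_reading(database, unknowns, 'NURSE42', 7.1, valid_ids)  # a 'double tap'
print(database)       # {'P123': [5.6]}
print(len(unknowns))  # 1 -- invisible unless someone checks the bin
```

The second reading was taken and written in the paper notes, yet the electronic record shows nothing: exactly the discrepancy the nurses were blamed for.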

During the trial the defence luckily noticed an odd discrepancy in the computer logs. It was really spiky in an unexplained way. On some days hardly any readings seemed to be taken, for example. One odd trough corresponded to a day the manufacturer said they had visited the hospital. They were asked to explain what they had done…

The hospital had asked them to get the data ready to give to the police. The manufacturer’s engineer who visited therefore ‘tidied up’ the database, deleting all the incomplete records…including all the ones the nurses had supposedly fabricated! The police had no idea this had been done.

Suddenly, no evidence

When this was revealed in court, the judge ruled that all the prosecution’s evidence was unusable. The prosecution said, therefore, they had no evidence at all to present. In this situation, the trial ‘collapses’: the nurses were completely innocent, and the trial immediately stopped.

The trial had already blighted the careers of lots of good nurses though. In fact, some of the other nurses pleaded guilty as they had no memory of what had actually happened but had been confronted with the ‘fact’ that they must have been negligent as “the computers could not lie”. Some were jailed. In the UK, you can be given a much shorter jail sentence, or maybe none at all, if you plead guilty. It can make sense to plead guilty even if you know you aren’t — you only need to think the court will find you guilty. Which isn’t the same thing.

Silver bullets?

Governments see digitalisation as a silver bullet to save money and improve care. It can do that if you get it right. But digital is much harder to get right than most people realise. In the story here, not getting the digital right — and not understanding it — caused serious problems for lots of nurses.

It takes skill and deep understanding to design digital things to work in a way that really makes things better. It’s hard for hospitals to understand the complexities in what they are buying. Ultimately, it’s nurses and doctors who make it work. They have to.

They shouldn’t be automatically blamed when things go wrong because digital technology is hard to design well.


Related Magazine …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.




Digital lollipop: no calories, just electronics!

Can a computer create a taste in your mouth? Imagine scrolling down a list of flavours and then savouring your sweet choice from a digital lollipop. Not keen on that flavour? Just click and choose a different one, and another, and another. No calories, just the taste.

Nimesha Ranasinghe, a researcher at the National University of Singapore, is developing a Tongue Mounted Digital Taste Interface, or digital lollipop. It sends tiny electrical signals to the very tip of your tongue to stimulate your taste buds and create a virtual taste!

One of UNESCO’s 2014 ‘10 best innovations in the world’, the prototype doesn’t quite look like a lollipop (yet). There are two parts to this sweet sensation: the wearable tongue interface and the control system. The bit you put in your mouth, the tongue interface, has two small silver electrodes. You touch them to the tip of your tongue to get the taste hit. The control system creates a tiny electrical current and a minuscule temperature change, creating a taste as it activates your taste buds.

The prototype lollipop can create sour, salty, bitter, sweet, minty, and spicy sensations but it’s not just a bit of food fun. What if you had to avoid sweet foods or had a limited sense of taste? Perhaps the lollipop can help people with food addictions, just like the e-cigarette has helped those trying to give up smoking?

But eating is more than just a flavour on your tongue; it is a multi-modal experience: you see the red of a ripe strawberry, hear the crunch of a carrot, feel sticky salt on chippy fingers, smell the Sunday roast, anticipate that satisfied snooze afterwards. How might computers simulate all that? Does it start with a digital lollipop? We will have to wait and see, hear, taste, smell, touch and feel!

Taste over the Internet

The Singapore team are exploring how to send tastes over the Internet. They have suggested rules for sending ‘taste’ messages between computers, called the Taste Over Internet Protocol, including a messaging format called TasteXML. They’ve also outlined the design for a mobile phone with electrodes to deliver the flavour! Sweet or salt, anyone?

Jane Waite, Queen Mary University of London






Engineering a cloak of invisibility: manipulating light with metamaterials

by Akram Alomainy and Paul Curzon, QMUL

You pull a cloak around you and disappear! Reality or science fiction? Harry Potter’s invisibility cloak is surely Hogwarts’ magic that science can’t match. Even in Harry Potter’s world it takes powerful magic and complicated spells to make it work. Turns out even that kind of magic can be done with a combination of materials science and computer science. Professor Susumu Tachi of the University of Tokyo has developed a cloak made of thousands of tiny beads. Cameras video what is behind you and a computer system then projects the appropriate image onto the front of the cloak. The beads are made of a special material called retro-reflectrum. It is vital to give the image a natural feel – normal screens give too flat a look, losing the impression of seeing through the person. Now you see me, now you don’t at the flick of a switch.

But could an invisibility cloak, without tiny screens on it, ever be a reality? It sounds impossible especially if you understand how light behaves. It bounces off the things around us, travelling in straight lines. You see them when that reflected light eventually reaches your eyes. I can see the red toy over there because red light bounced from it to me. For it to be invisible, no light from it must reach my eyes, while at the same time light from everything else around should. How could that be possible? Akram Alomainy of Queen Mary, University of London, tells us more.

Well, maybe things aren’t quite that simple… halls of mirrors, rainbows, polar bears and desert mirages all suggest some odd things can happen with light! They show that manipulating light is possible and that we may even be able to bend it in a way that alters the way things look – even humans.

Light fantastic

Have you ever wondered how the hall of mirrors in a fun fair distorts your reflection? Some make us look short and fat while others make us tall and slim! It’s all about controlling the behaviour of light. The light rays still travel in straight lines, but the mirrors deceive the eye. The light seems to arrive from a different place to reality because the mirrors are curved, not flat, making the light bounce at odd angles.

A rainbow is an object we see that isn’t really there. Rainbows occur because white light doesn’t actually exist: it is just coloured light all mixed up. The colour of an object you see depends on which colours pass through or get reflected, and which get absorbed. The light is white when it hits the raindrops, but then comes out as the whole spectrum of colours. The colours head off at slightly different angles, which is why they appear in their different rainbow positions.

What about polar bears? Did you know that they have black skins and semi-transparent hair? You see them as white because of the way the hollow hairs reflect sunlight.

So what does this have to do with invisibility? Well, it suggests that with light all is not as it seems. Perhaps we can manipulate it to do anything we want.

Water! Water!

Now for the clincher – mirages! They show that invisibility cloaks ought to be a possibility. Light from the sun travels in a straight line through the sky, which means we see everything as it is. Except not quite. In places like deserts, where the temperature is very high at noon, apparently weird things happen to the light. The difference in temperature, and thus in density, between the higher air layers and the air closer to the ground can be quite large. That difference makes light coming from the sky change direction as it passes through each layer: it bends rather than just travelling in a straight line to us. It is that bent image of the sky, seen on the ground, that looks like a pool of water – the mirage. Our brains assume the light travelled in a straight line, so they misinterpret its location. Now, to make something invisible we just need to make light bend round it. That invisibility cloak is a possibility if we can just engineer what mirages do – bend light!
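The bending at each boundary between air layers follows Snell’s law of refraction. Here is a small Python sketch of that rule (the refractive indices below are rough illustrative values, not measurements):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law: angle (in degrees) of a ray crossing from a medium
    with refractive index n1 into one with index n2."""
    s = n1 / n2 * math.sin(math.radians(incidence_deg))
    if abs(s) > 1:
        return None  # total internal reflection: the ray bends back instead
    return math.degrees(math.asin(s))

# A grazing ray passing from cooler air (higher index) into the hot,
# less dense air just above the desert floor bends slightly further
# from the vertical -- layer by layer, this curves the ray upward,
# so an image of the sky appears to come from the ground.
print(refraction_angle(1.00029, 1.00026, 89.0))
```

Each individual bend is tiny, but a ray skimming through many such layers curves enough to fool the eye – and a metamaterial cloak aims to engineer exactly this kind of bending, point by point.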

Nano-machines

That is the basic idea and it is an area of science called ‘transformation optics’ that makes it possible. The science tells us about the properties that each point of an object must have to make light waves travel in any particular way we wish through it. To make it happen engineers must then create special materials with those properties. These materials are known as metamaterials. Their properties are controlled using electromagnetism, which is where the electronic engineers come in! You can think of them as being made of vast numbers of tiny electrical machines built into big human-scale structures. Each tiny machine is able to control how light passes through it, even bending light in a way no natural material could. If the machines are small enough – ‘nanotechnology’ as small as the wavelength of light – and their properties can be controlled really precisely to match the science’s prediction, then we can make light passing through them do anything we want. For invisibility, the aim is to control those properties so the light bends as it passes through a metamaterial cloak. If the light comes out the other side of the cloak unchanged and travelling in the same direction as it entered, while avoiding objects in the middle, then those objects will be invisible.

Now you see it…

Simple cloaking devices that work this way have already been created but they are still very limited. One of the major challenges is the range of light they can work with. At the moment it’s possible to make a cloak that bends light of a single colour (a single frequency), but not the whole spectrum. As Yang Hao, a professor working in this area at Queen Mary, notes: “The obstacle engineers face is the complex manufacturing techniques needed to build devices that can bend light across the whole visible light spectrum. However, with the progress being made in nanotechnologies this could become a possibility in the near future”.

Perhaps we should leave the last word to J.K. Rowling: “A suspicious object like that, it was clearly full of Dark Magic.” So while we should appreciate the significance of such an invention we should perhaps be careful about the negative consequences!




EPSRC supports this blog through research grant EP/W033615/1.

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


Solving problems you care about

by Patricia Charlton and Stefan Poslad, Queen Mary University of London

The best technology helps people solve real problems. To be a creative innovator you need not only to be able to create a solution that works but also to spot a need in the first place and be able to come up with creative solutions. Over the summer a group of sixth formers on internships at Queen Mary had a go at doing this. Ultimately their aim was to build something from a programmable gadget such as a BBC micro:bit or Raspberry Pi. They therefore had to learn about the different possible gadgets they could use, how to program them and how to control the on-board sensors available. They were then given the design challenge of creating a device to solve a community problem.

Hearing the bus is here

Tai Kirby wanted to help visually impaired people. He knew that it’s hard for someone with poor sight to tell when a bus is arriving. In busy cities like London the problem is even worse, as buses for different destinations often arrive at once. His solution was a prototype that announces when a specific bus is arriving, letting the person know which bus was which. He wrote it in Python and it used a Raspberry Pi linked to low-energy Bluetooth devices.
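The core idea can be sketched in a few lines of Python. Everything here – the beacon IDs, the route table, the function name – is invented for illustration; Tai’s real prototype used a Raspberry Pi scanning for Bluetooth Low Energy devices:

```python
# Hypothetical sketch: imagine each bus carries a Bluetooth beacon, and
# the device matches the beacon IDs it can hear against a table of
# routes. All the names and IDs below are made up for the example.
ROUTES = {
    "beacon-25": "Number 25 to Ilford",
    "beacon-205": "Number 205 to Bow Church",
}

def announcements(visible_beacons):
    """Turn the beacons currently in range into spoken announcements,
    ignoring any Bluetooth devices that aren't known buses."""
    return [f"{ROUTES[b]} is arriving" for b in visible_beacons if b in ROUTES]

print(announcements(["beacon-25", "somebody's-headphones"]))
# prints ['Number 25 to Ilford is arriving']
```

In the real device the resulting strings would be sent to a speech synthesiser rather than printed.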

The fun spell

Filsan Hassan decided to find a fun way to help young kids learn to spell. She created a gadget that associated different sounds with different letters of the alphabet, turning spelling words into a fun, musical experience. It needed two micro:bits and a screen communicating with each other using a radio link. One micro:bit controlled the screen while the other ran the main program that allowed children to choose a word, play a linked game and spell the word using a scrolling alphabet program she created. A big problem was how to make sure the combination of gadgets had a stable power supply. This needed a special circuit to get enough power to the screen without frying the micro:bit and sadly we lost some micro:bits along the way: all part of the fun!

Remote robot

Jesus Esquivel Roman developed a small remote-controlled robot using a buggy kit. There are lots of applications for this kind of thing, from games to mine-clearing robots. The big challenge he had to overcome was how to do the navigation using a compass sensor. The problem was that the batteries and motor interfered with the calibration of the compass. He also designed a mechanism that used the accelerometer of a second micro:bit allowing the vehicle to be controlled by tilting the remote control.
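The tilt-to-steer part of the remote control can be sketched like this. It’s a hedged illustration in plain Python: the axis directions, dead-zone size and command names are assumptions, not details from Jesus’s actual code (micro:bit accelerometers do report readings of roughly ±1000 milli-g per axis, though):

```python
def direction_from_tilt(x, y, dead_zone=200):
    """Map accelerometer readings (milli-g) to a drive command.
    Small tilts inside the dead zone mean 'stop', so the buggy
    doesn't twitch when the remote is held roughly level."""
    if abs(x) < dead_zone and abs(y) < dead_zone:
        return "stop"
    if abs(x) > abs(y):           # tilted more sideways than forwards
        return "right" if x > 0 else "left"
    return "backward" if y > 0 else "forward"

print(direction_from_tilt(0, 0))      # prints stop
print(direction_from_tilt(800, 100))  # prints right
print(direction_from_tilt(0, -600))   # prints forward
```

On the real remote these commands would be sent over the micro:bit radio link to the buggy, which would turn them into motor speeds.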

Memory for patterns

Finally, Venet Kukran was interested in helping people improve their memory and thinking skills. He invented a pattern memory game using a BBC micro:bit, implemented in MicroPython. The game generates patterns that the player has to memorise and then replicate to score points. The program generates new patterns each time, so every game is different, and the more you play the more complex the patterns you have to remember become.
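The heart of a game like this can be sketched in ordinary Python. The move names and scoring rule below are invented for the illustration – the real version ran as MicroPython on a micro:bit, reading button presses rather than lists:

```python
import random

MOVES = ["up", "down", "left", "right"]

def new_pattern(length, rng=random):
    """A fresh random pattern each game, so no two games are alike."""
    return [rng.choice(MOVES) for _ in range(length)]

def score(pattern, guesses):
    """One point per move correctly replayed before the first mistake."""
    points = 0
    for move, guess in zip(pattern, guesses):
        if guess != move:
            break
        points += 1
    return points

target = new_pattern(4)
print(score(target, target))                  # a perfect replay: 4 points
print(score(["up", "down"], ["up", "left"]))  # prints 1 - then a miss
```

To make the game harder as you play, each round would simply generate a slightly longer pattern than the last.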

As they found, you have to be very creative to be an innovator: both to come up with real issues that need a solution and to overcome the problems you are bound to encounter along the way.





Stretching your keyboard – getting more out of QWERTY

by Jo Brodie, Queen Mary University of London

A smartphone’s on-screen keyboard layout, called QWERTY after the first six letters on the top line. Image by CS4FN after smartphone QWERTY keyboards.

If you’ve ever sent a text on a phone or written an essay on a computer you’ve most likely come across the ‘QWERTY’ keyboard layout. It looks like this on a smartphone.

This layout has been around in one form or another since the 1870s and was first used in old mechanical typewriters where pressing a letter on the keyboard caused a hinged metal arm with that same letter embossed at the end to swing into place, thwacking a ribbon coated with ink, to make an impression on the paper. It was quite loud!

The QWERTY keyboard isn’t just used by English speakers: it can easily be used by anyone whose language is based on the same A, B, C Latin alphabet (French, Spanish, German and so on). All the letters that an English speaker needs are right there in front of them on the keyboard, and with QWERTY… WYSIWYG (What You See Is What You Get). There’s a one-to-one mapping of key to letter: if you tap the A key you get a letter A appearing on screen; tap the M key and an M appears. (To get a lowercase letter you just tap the key, but to make it uppercase you need two keys: the up arrow (‘shift’) key plus the letter.)

A French or Spanish speaking person could also buy an adapted keyboard that includes letters like É and Ñ, or they can just use a combination of keys to make those letters appear on screen (see Key Combinations below). But what about writers of other languages which don’t use the Latin alphabet? The QWERTY keyboard, by itself, isn’t much use for them so it potentially excludes a huge number of people from using it.

In the English language the letter A never alters its shape depending on which letter goes before or comes after it. (There are 39 lower case letter ‘a’s and 3 upper case ‘A’s in this paragraph and, apart from the difference in case, they all look exactly the same.) That’s not the case for other languages such as Arabic or Hindi where letters can change shape depending on the adjacent letters. With some languages the letters might even change vertical position, instead of being all on the same line as in English.

Early attempts to make writing in other languages easier assumed that non-English alphabets could be adapted to fit into the dominant QWERTY keyboard, with letters that are used less frequently being ignored and other letters being simplified to suit. That isn’t very satisfactory and speakers of other languages were concerned that their own language might become simplified or standardised to fit in with Western technology, a form of ‘digital colonialism’.

But in the 1940s other solutions emerged. The design for one Chinese typewriter avoided QWERTY’s ‘one key equals one letter’ (which couldn’t work for languages like Chinese or Japanese which use thousands of characters – impossible to fit onto one keyboard, see picture at the end!).

Rather than using a key to print one letter, the user typed a key to begin the process of finding a character. A range of options would be displayed and the user would select another key from among them, the options narrowing until they arrived at the character they wanted. Luckily this early ‘retrieval system’ of typing only took a few keystrokes to bring up the right character; otherwise it would have taken ages.

This is a way of using a keyboard to type words rather than letters, saving time by only displaying possible options. It’s also an early example of ‘autocomplete’ now used on many devices to speed things up by displaying the most likely word for the user to tap, which saves them typing it.

For example, in English the letter Q is almost* always followed by the letter U, to produce words like QUAIL, QUICK or QUOTE. Only a handful of letters can follow QU – the letter Z wouldn’t be any use, but most of the vowels would be. You might be shown A, E, I or O and, if you selected A, you’ve further restricted what the word could be (QUACK, QUARTZ, QUARTET etc.).
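The narrowing idea is easy to sketch in Python. The tiny word list below is invented for the example – a real autocomplete would use a dictionary of thousands of words and rank them by how common they are:

```python
# Toy autocomplete: narrow a word list as each letter is typed.
WORDS = ["quail", "quick", "quote", "quack", "quartz", "quartet", "queen"]

def candidates(prefix):
    """The words still possible after typing `prefix`."""
    return [w for w in WORDS if w.startswith(prefix)]

def next_letters(prefix):
    """The set of letters that could usefully come next."""
    return sorted({w[len(prefix)] for w in candidates(prefix)
                   if len(w) > len(prefix)})

print(next_letters("q"))    # prints ['u'] - every word continues with U
print(next_letters("qua"))  # prints ['c', 'i', 'r']
print(candidates("quar"))   # prints ['quartz', 'quartet']
```

Each keystroke throws away every word that no longer fits, which is exactly why only a few keystrokes were needed to pin down one character among thousands.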

In fact one modern typing system, designed for typists with physical disabilities, also uses this concept of ‘retrieval’, relying on a combination of letter frequency (how often a letter is used in the English language) and probabilistic predictions (how likely a particular letter is to come next in an English word). Dasher is a computer program that lets someone write text without using a keyboard: instead a mouse, joystick, touchscreen or a gaze-tracker (a device that tracks the person’s eye position) can be used.

Letters are presented on screen in alphabetical order from top to bottom on the right-hand side (lowercase first, then uppercase, then punctuation marks). The user ‘drives’ through the word by pushing the cursor towards the first letter; the next possible set of letters then appears to choose from, and so on until each word is completed. You can see it in action in this video on the Dasher Interface.
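Dasher gives likelier letters more room on screen. A crude sketch of that idea in Python – the ten-word ‘corpus’ here is invented for the example, whereas the real Dasher uses a proper statistical language model trained on large amounts of text:

```python
from collections import Counter

# A tiny invented corpus standing in for a real language model.
CORPUS = "the quick brown fox jumps over the lazy dog the end".split()

def letter_shares(prefix):
    """Share of 'screen space' each possible next letter would get,
    proportional to how often it follows `prefix` in the corpus."""
    counts = Counter(
        w[len(prefix)] for w in CORPUS
        if w.startswith(prefix) and len(w) > len(prefix)
    )
    total = sum(counts.values())
    return {letter: n / total for letter, n in counts.items()}

print(letter_shares("th"))  # prints {'e': 1.0} - the only option here
```

A letter with a bigger share gets a bigger target box, so common continuations are quicker to steer into.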

Key combinations

The use of software to expand the usefulness of QWERTY keyboards is now commonplace, with programs pre-installed onto devices which run in the background. These IMEs, or Input Method Editors, can convert a set of keystrokes into a character that’s not available on the keyboard itself. For example, while I can type SHIFT+8 to display the asterisk (*) symbol that sits on the 8 key, there’s no degree symbol (as in 30°C) on my keyboard. On a Windows computer I can create it using the numeric keypad on the right of some keyboards, holding down the ALT key while typing the sequence 0176. While I’m typing the numbers nothing appears, but once I complete the sequence and release the ALT key the ° appears on the screen.
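Under the hood, the ALT-code trick is just naming a character by its number. In Unicode the degree sign is character number 176 (written U+00B0), and Python’s built-in chr and ord functions convert between numbers and characters in the same way:

```python
# chr() turns a character's code number into the character itself;
# ord() goes the other way round.
degree = chr(176)        # the same character Alt+0176 produces on Windows
print(degree)            # prints °
print(ord("°"))          # prints 176
print(f"It was 30{degree}C in the shade")
```

Every character you can type – in any alphabet – has a number like this, which is how one keyboard can reach far more characters than it has keys.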

English language keyboard image by john forcier from Pixabay highlighted by CS4FN, showing the numeric keypad highlighted in yellow with the two Alt keys and the ‘num lock’ key highlighted in pink. Num lock (‘numeric lock’) needs to be switched on for the keypad to work, then use the Alt key plus a combination of letters on the numeric keypad to produce a range of additional ‘alt code‘ characters.

When Japanese speakers type they use the main ‘ABC’ letters on the keyboard, but the principle is the same – a combination of keys produces a sequence of letters that the IME converts to the correct character. Or perhaps they could use Google Japan’s April Fool solution from 2010, which surrounded the user with half a dozen massive keyboards holding hundreds of keys – a little like sitting at a massive drum kit!

*QWERTY is a ‘word’ which starts with a Q that’s not followed by a U of course…


More on …

The ‘retrieval system’ of typing mentioned above, which lets the user get to the word or characters more quickly, is similar to the general problem-solving strategy called ‘Divide and Conquer’. You can read more about that and other search algorithms in our free booklet ‘Searching to Speak‘ (PDF), which explores how the design of an algorithm could allow someone with locked-in syndrome to communicate. Locked-in syndrome is a condition, resulting from a stroke, in which a person is totally paralysed. They can see, hear and think but cannot speak. How could a person with locked-in syndrome write a book? How might they do it if they knew some computational thinking?



A machine wrote this post – OpenAI’s ChatGPT

Robot emerging from a laptop screen
ChatGPT image AI Generated by Alexandra_Koch from Pixabay

Blog post by OpenAI’s ChatGPT. Introduction by Jo Brodie

1. Jo wrote this…

ChatGPT is an AI chatbot which can have a conversation with you using everyday (natural) language. It was trained on huge amounts of digital information found on the internet, and its ability to use language to write good answers in response to questions has been tested and improved by giving it feedback. It learns, and refines its output.

You don’t need to program it: you can just ask it a question (give it a prompt) and, after a moment’s ‘thinking’, it will produce an answer. Lots of people have been testing ChatGPT by asking it questions we already know the answer to, to see how well it does. It’s actually quite impressive! If you think it’s made a mistake or misunderstood your prompt you can reword your question or give it more information to help it improve its answer.

Can ChatGPT write an article that’s suitable for our magazine or website?

I gave ChatGPT the following prompt: “Please write a 200 to 500 word article suitable for the CS4FN magazine” and moments later it generated a completely new article about machine learning. That told me it knew (or at least could find out) that CS4FN has something to do with computer science. I’m not surprised that it picked machine learning as a topic given that it is a machine that can learn. I didn’t tell it what to write about, it came up with that idea itself.

The article it gave me contained some technical words and not all of them were explained, so I asked it to rewrite the article for a younger audience (who might not have come across some of the words yet).

The result is below – what do you think?

2. ChatGPT wrote this…

From Logic to Learning: The Incredible Advancements of Artificial Intelligence

Artificial Intelligence (AI) has come a long way since it was first invented in the 1950s. At the beginning, AI was based on logic and rules. Experts in a specific field would give the computer a set of instructions to help it make decisions. But, these rules were limited and couldn’t handle complex situations.

Today, AI has seen a huge shift towards machine learning. This means that instead of following rules, AI learns from data. This shift happened because computers got stronger, and more data became available. The breakthroughs in machine learning algorithms helped too.

Machine learning has two categories: supervised learning and unsupervised learning. In supervised learning, the computer learns from labeled data. This means that the correct answer is given to the computer for each input. Once the computer has learned enough, it can predict the answer for new data. Supervised learning is useful for applications like image and speech recognition.

In unsupervised learning, the computer learns from unlabeled data. This means that there is no correct answer provided. The goal of unsupervised learning is to find patterns in the data. This type of learning is useful in applications like anomaly detection and clustering.

Deep learning is a special type of machine learning that uses neural networks. Neural networks are inspired by the human brain and have layers of connected nodes. Each node applies a mathematical function to its inputs and sends the result to the next layer of nodes. Deep learning has revolutionized AI, leading to breakthroughs in areas like natural language processing, computer vision, and game playing.

Even though machine learning has achieved so much, there are still challenges to overcome. One challenge is the lack of transparency in machine learning models. Unlike rule-based systems, it can be hard to understand how a machine learning model made a decision. This can be a problem, especially in areas like healthcare where the decision-making process needs to be trustworthy and explainable.

Another challenge is the need for a lot of data to train machine learning models. While data is easy to find in some areas, like social media, it can be hard to come by in other areas like medical imaging. This lack of data can make machine learning less effective in some applications.

In conclusion, AI has come a long way since the 1950s. We’ve gone from rule-based systems to machine learning, which has led to breakthroughs in many areas. Although there are still challenges to overcome, the future of AI is exciting!

The first attempt is below…

The Evolution of Artificial Intelligence: From Rule-Based to Machine Learning

Artificial Intelligence (AI) has come a long way since its inception in the 1950s. Initially, AI was developed based on rule-based systems, where experts in a specific domain would provide rules to the system to help it make decisions. While these systems worked well in some areas, they were limited in their ability to handle complex problems and adapt to new situations.

In recent years, AI has seen a significant shift towards machine learning, where systems learn from data rather than relying on pre-defined rules. This shift has been driven by advances in computer hardware, the availability of large amounts of data, and breakthroughs in machine learning algorithms.

Machine learning has two main categories: supervised learning and unsupervised learning. In supervised learning, the system is trained on labeled data, where the correct answer is provided for each input. The system uses this labeled data to learn a function that maps inputs to outputs. Once trained, the system can make predictions on new, unseen data. This approach has been successful in a wide range of applications, such as image and speech recognition.

In unsupervised learning, the system is trained on unlabeled data, where no correct answer is provided. The goal of unsupervised learning is to discover patterns and structure in the data. This approach has been useful in areas such as anomaly detection and clustering.

Deep learning is a subset of machine learning that uses neural networks to learn from data. Neural networks are inspired by the structure of the human brain and consist of layers of interconnected nodes. Each node applies a mathematical function to its inputs and passes the result to the next layer of nodes. Deep learning has revolutionized the field of AI, enabling breakthroughs in areas such as natural language processing, computer vision, and game playing.

Despite the many successes of machine learning, there are still limitations and challenges to overcome. One challenge is the lack of interpretability of machine learning models. Unlike rule-based systems, it can be difficult to understand how a machine learning model arrived at a particular decision. This lack of transparency can be problematic in areas such as healthcare, where the decision-making process needs to be explainable and trustworthy.

Another challenge is the need for large amounts of data to train machine learning models. While data is abundant in some domains, such as social media, it can be scarce in others, such as medical imaging. This scarcity of data can limit the effectiveness of machine learning approaches in certain applications.

In conclusion, the evolution of AI from rule-based systems to machine learning has enabled breakthroughs in many areas and holds tremendous potential for the future. However, there are still challenges to overcome, particularly in the areas of interpretability and data scarcity. Addressing these challenges will be critical to realizing the full potential of AI in the years to come.





