Photogrammetry for fun, preservation and research – digitally stitching together 2D photographs to visualise the 3D world.

Composite image of one green glass bottle made from three photographs. Image by Jo Brodie

Imagine you’re the costume designer for a major new film about a historical event that happened 400 years ago. You’d need to dress the actors so that they look like they’ve come from that time (no digital watches!) and might want to take inspiration from some historical clothing that’s being preserved in a museum. If you live near the museum and can get permission to see (or even handle) the material, that makes it a bit easier, but perhaps the ideal item is in another country or too fragile for handling.

This is where 3D imaging can help. Photographs are nice but don’t let you get a sense of what an object is like when viewed from different angles, and they don’t really give a sense of texture. Video can be helpful, but you don’t get to control the view. One way around that is to take lots of photographs, from different angles, then ‘stitch’ them together to form a three dimensional (3D) image that can be moved around on a computer screen – an example of this is photogrammetry.

In the (2D) example above I’ve manually combined three overlapping close-up photos of a green glass bottle, to show what the full size bottle actually looks like. Photogrammetry is a more advanced version (but does more or less the same thing) which uses computer software to line up the points that overlap and can produce a more faithful 3D representation of the object.
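To give a flavour of what that software is doing under the hood, here is a minimal sketch using the OpenCV library (the filenames are placeholders I’ve made up, and real photogrammetry tools go much further, estimating camera positions and building a full 3D mesh from the matched points):

```python
# A toy sketch of the first step photogrammetry software performs:
# finding points that two overlapping photos have in common.
# (Assumes OpenCV is installed: pip install opencv-python)
import cv2

# Load two overlapping photos of the same object (hypothetical filenames).
img1 = cv2.imread("bottle_left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bottle_right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive 'feature points' in each image and describe them.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two photos: each good match is a point
# on the bottle seen in both images, which is what lets the software
# work out where the cameras were and build up a 3D model.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"Found {len(matches)} overlapping points")
```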

In the media below you can see a looping gif of the glass bottle being rotated first in one direction and then the other. It’s the result of a 3D ‘scan’ made from only 29 photographs using the free software app Polycam. With more photographs you could end up with a more impressive result. You can interact with the original scan here – you can zoom in and turn the bottle to view it from any angle you choose.

A looping gif of the 3D Polycam file being rotated one way then the other. Image by Jo Brodie

You might walk around your object and take many tens of images from slightly different viewpoints with your camera. Once your photogrammetry software has lined the images up on a computer you can share the result and then someone else would be able to walk around the same object – but virtually!

Photogrammetry is used by hobbyists (it’s fun!) but is also being used in lots of different ways by researchers. One example is the field of ‘restoration ecology’, in particular monitoring damage to coral reefs over time, but also checking whether particular reef recovery strategies are successful. Reef researchers can use several cameras at once to take lots of overlapping photographs, from which they can then create three-dimensional maps of the area. A new project recently funded by NERC* called “Photogrammetry as a tool to improve reef restoration” will investigate the technique further.

Photogrammetry is also being used to preserve our understanding of delicate historic items such as Stuart embroideries at The Holburne Museum in Bath. These beautiful craft pieces were made in the 1600s using another type of 3D technique. ‘Stumpwork’ or ‘raised embroidery’ used threads and other materials to create pieces with a layered three dimensional effect. Here’s an example of someone playing a lute to a peacock and a deer.

“Satin worked with silk, chenille threads, purl, shells, wood, beads, mica, bird feathers, bone or coral; detached buttonhole variations, long-and-short, satin, couching, and knot stitches; wood frame, mirror glass, plush”, 1600s. Photo CC0 from the Metropolitan Museum of Art, uploaded by Pharos on Wikimedia.

A project funded by the AHRC* (“An investigation of 3D technologies applied to historic textiles for improved understanding, conservation and engagement“) is investigating a variety of 3D tools, including photogrammetry, to recreate digital copies of the Stuart embroideries so that people can experience a version of them without the glass cases that the real ones are safely stored in.

Using photogrammetry (and other 3D techniques) means that many more people can enjoy, interact with and learn about all sorts of things, without having to travel or damage delicate fabrics, or corals.

*NERC (Natural Environment Research Council) and AHRC (Arts and Humanities Research Council) are two organisations that fund academic research in universities. They are part of UKRI (UK Research & Innovation), the wider umbrella group that includes several research funding bodies.

Other uses of photogrammetry

Cultural heritage and ecology examples are highlighted in this post, but photogrammetry is also used in interactive games (particularly virtual reality), engineering, crime scene forensics and the film industry – Mad Max: Fury Road, for example, used the technique to create a number of its visual effects. Hobbyists also create 3D versions of all sorts of objects (called ‘3D assets’) and sell these to games designers to include in their games for players to interact with.

Careers

This was an example job advert (since closed) for a photogrammetry role in virtual reality.

Further reading

Other CS4FN posts about the use of 3D imaging

“The team behind the idea scanned several works of art using very accurate laser scanners that build up a 3D picture of the thing being scanned. From this they created a 3D model of the work. This then allowed a person wearing the gloves to feel as though they were touching the actual sculpture, feeling all the detail.”

See also our collection of Computer Science & Research posts.



Music & Computing: TouchKeys: getting more from your keyboard

By Jo Brodie and Paul Curzon, Queen Mary University of London

Even if you’re the best keyboard player in the world the sound you can get from any one key is pretty much limited to ‘loud’ or ‘soft’, ‘short’ or ‘long’ depending on how hard and how quickly you press it. The note’s sound can’t be changed once the key is pressed. At best, on a piano, you can make it last longer using the sustain pedal. A violinist, on the other hand, can move their finger on the string while it’s still being played, changing its pitch to give a nice vibrato effect. Wouldn’t it be fun if keyboard players could do similar things?

Andrew McPherson and other digital music researchers at QMUL and Drexel University came up with a way to give keyboard performers more room to express themselves like this. TouchKeys is a thin plastic coating, overlaid on each key of a keyboard, but barely noticeable to the keyboard player. The coating contains sensors and electronics that can change the sound when a key is touched. The TouchKeys’ electronics connect to the keyboard’s own controller and so changes the sounds already being made, expanding the keyboard’s range. This opens up a whole world of new sonic possibilities to a performer.

The sensors can follow the position and movement of your fingers and respond appropriately in real time, extending the range of sounds you can get from your keyboard. By wiggling your finger from side to side on a key you can make a vibrato effect, or you can change the note’s pitch completely by sliding your finger up and down the key. The technology is similar to a phone’s touchscreen, where different movements (‘gestures’) make different things happen. An advantage of the system is that it can easily be applied to a keyboard a musician already knows how to play, so they’ll find it easy to start to use without having to make big changes to their style of playing.
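As a rough illustration of the kind of mapping involved – this is not the actual TouchKeys code, just a sketch using the mido MIDI library – a finger’s sideways position on a key might be turned into pitch-bend messages something like this:

```python
# A minimal sketch (not the real TouchKeys firmware) of the idea:
# turn a finger's sideways position on a key into MIDI pitch-bend
# messages, giving a vibrato-like effect. (pip install mido)
import mido

PITCH_BEND_RANGE = 8191  # MIDI pitch bend runs from -8192 to +8191

def touch_to_pitch_bend(x_position: float) -> mido.Message:
    """Map a touch position across the key (0.0 = left edge,
    1.0 = right edge, 0.5 = centre) to a pitch-bend message."""
    offset = (x_position - 0.5) * 2        # -1.0 .. +1.0
    bend = int(offset * PITCH_BEND_RANGE)  # -8191 .. +8191
    return mido.Message("pitchwheel", pitch=bend)

# Wiggling a finger from side to side produces a stream of bends:
for x in [0.5, 0.6, 0.5, 0.4, 0.5]:
    print(touch_to_pitch_bend(x))
```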

They wanted to get TouchKeys out of the lab and into the hands of more musicians, so they teamed up with members of London’s Music Hackspace community, who run courses in electronic music, to create some initial versions for sale. Early adopters could choose either a DIY kit to add to their own keyboard, wire up and start to play, or a ready-to-play keyboard with the TouchKeys system already installed.

The result is that lots of musicians are already using TouchKeys to get more from their keyboard in exciting new ways.


Earlier this year Professor Andrew McPherson gave his inaugural lecture (a public lecture given by an academic who has been promoted) at Imperial College London where he is continuing his research. You can watch his lecture – Making technology to make music – below.

Further reading

Andrew McPherson’s work on the Bela platform




Joyce Weisbecker: a teenager, the first indie games developer?


by Paul Curzon, Queen Mary University of London

Video games were once considered to be only of interest to boys, and the early games industry was dominated by men. Despite that, a teenage girl, Joyce Weisbecker, was one of the pioneers of commercial game development.

Originally, video games were seen as toys for boys. Gradually it was realised that there was a market for female game players too, if only suitably interesting games were developed, so the games companies eventually started to tailor games for them. That also meant, very late in the day, they started to employ women as games programmers. Now it is a totally normal thing to do. However, women were also there from the start, designing games. The first female commercial programmer (and possibly first independent developer) was Joyce Weisbecker. Working as an independent contractor she wrote her first games for sale in 1976 for the RCA Studio II games console that was released in January 1977.

RCA Studio II video games console
Image by WikimediaImages from Pixabay

Joyce was only a teenager when she started to learn to program computers and wrote her first games. She learnt on a computer that her engineer father designed and built at home called FRED (Flexible Recreational and Educational Device). He worked for RCA (originally the Radio Corporation of America), one of the major electronics, radio, TV and record companies of the 20th century. The company diversified their business into computers and Joyce’s father designed them for RCA (as well as at home for a hobby). He also invented a programming language called CHIP-8 that was used to program the RCA computers. This all meant Joyce was in a position to learn CHIP-8 and then to write programs for RCA computers including their new RCA Studio II games console before the machine was released, as a post-high school summer job.

The code for two games that she wrote in 1976, called Snake Race and Jackpot, was included in the manual for an RCA microcomputer called the COSMAC VIP, and she also wrote more programs for it the following year. These computers came in kit form for the buyer to build themselves. Her programs were example programs included for the owner to type in and then play once they had built the machine. Including them meant their new computer could do something immediately.

She also wrote the first game that she was paid for that summer of 1976. It was for the RCA Studio II games console, and it earned her $250 – well over $1000 in today’s money, so worth having for a teenager who would soon be going on to college. It was a quiz program called TV School House I. It pitted two people against each other, answering questions on topics such as maths, history and geography, with two levels of difficulty. Questions were read from question booklets and whoever typed in the multiple-choice answer number the fastest got the points for a question, with more points the faster they were. There is currently a craze for apps that augment physical games and this was a very early version of the genre.

Speedway screen from Wikimedia

She quickly followed it with racing and chase games, Speedway and Tag, though as screens were still tiny and very limited then, the graphics of all these games were very, very simple – e.g. racing rectangles around a blocky, rectangular racing track.

Unfortunately, the RCA games console itself was a commercial failure as it couldn’t compete with consoles like the Atari 2600, so RCA soon ended production. Joyce, meanwhile, retired from the games industry, still a teenager, ultimately becoming a radar signal processing engineer.

While games like Pong had come much earlier, the Atari 2600, which is credited with launching the first video game boom, was released in 1977, with its version of Space Invaders, one of the most influential video games of all time, released in 1980. Joyce really was at the forefront of commercial games design. As a result her papers related to games programming, including letters and program listings, are now archived in the Strong National Museum of Play in New York.


Happy #WorldEmojiDay 2024 – here’s an emoji film quiz & some computer science history

Emoji! 💻 😁

World Emoji Day is celebrated on the 17th of July every year (why?) and so we’ve put together a ‘Can you guess the film from the emoji’ quiz and added some emoji-themed articles about computer science and the history of computing.

  1. An emoji film quiz
  2. Emoji accessibility, and a ‘text version’ of the quiz
  3. Computer science articles about emoji

Emoji are small digital pictures that behave like text – you can slot them easily into sentences (you don’t have to ‘insert an image’ from a file or worry about the picture pushing the text out of the way). You can even make them bigger or smaller with the text (🎬 – compare the one in the section title below). People use them as a quick way of sharing a thought or emotion, or adding a comment like a thumbs up, so they’re (sort of) a form of data representation. Even so, communication with emoji can be just as easily misunderstood as communication using words alone. Different age groups might read the same emoji and understand something quite different from it. What do you think 🙂 (‘slightly smiling face’ emoji) means? What do people older or younger than you think it means? Lots of people think it means “I’m quite happy about this” but others use it in a more sarcastic way.
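Because emoji really are text, you can examine them with ordinary string tools. A quick Python illustration (the particular emoji is just an example):

```python
# Emoji are just text: each one is a Unicode character with a code
# point, stored as bytes like any other letter.
smiley = "🙂"
print(hex(ord(smiley)))        # 0x1f642 - the 'slightly smiling face' code point
print(smiley.encode("utf-8"))  # b'\xf0\x9f\x99\x82' - its four UTF-8 bytes
print(len("hi 🙂"))            # 4 - it counts as one character in the string
```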

1. An emoji film quiz 🎬

You can view the quiz online or download and print from Word or PDF versions. If you’re in a classroom with a projector the PowerPoint file is the one you want.

More Computational Thinking Puzzles

2. Emoji accessibility, and a text version of the quiz

We’ve included a text version for blind or visually impaired people which can either be read out by someone or by a screen reader. Use the ‘Text quiz’ files in Word or PDF above.

More generally, when people share photographs and other images on social media it’s helpful if they add some information about the image to the ‘Alt Text’ (alternative text) box. This tells people who can’t easily see the image what’s in the picture. Screenreaders will also tell people what the emojis are in a tweet or text message, but if you use too many… it might sound like this 😬.

3. Computer science articles about emoji

This next article is about the history of computing and the development of the graphical icons for apps that started life being drawn on gridded paper by Susan Kare. You could print some graph / grid paper and design your own!

A copy of this post can also be found as a permanent page at https://cs4fn.blog/emoji/



Art Touch and Talk Tour Tech


by Paul Curzon, Queen Mary University of London

What could a blind or partially-sighted person get from a visit to an art gallery? Quite a lot if the art gallery puts their mind to it. Even more if they make use of technology. So much so, we may all want the enhanced experience.

A sculpture of a head and shoulders, heavily textured with a network of lines and points
Image by NoName_13 from Pixabay

The best art galleries provide special tours for blind and partially-sighted people. One kind involves a guide or curator explaining paintings and other works of art in depth. It is not exactly like a normal guided tour that might focus on the history or importance of a painting. The best will give both an overview of the history and importance whilst also giving a detailed description of the whole picture as well as the detail, emphasising how each part was painted. They might, for example, describe the brush strokes and technique as well as what is depicted. They help the viewer create a really detailed mental model of the painting.

One visually-impaired guide who now gives such tours at galleries such as Tate Britain, Lisa Squirrel, has argued that these tours give a much deeper and richer understanding of the art than a normal tour and certainly more than someone just looking at the pictures and reading the text as they wander around. Lisa studied Art History at university and before visiting a gallery herself reads lots and lots about the works and artists she will visit. She found that guided tours by sighted experts using guided hand movements in front of a painting helped her build really good internal models of the works in her mind. Combined with her extensive knowledge from reading, she wasn’t building just a picture of the image depicted but of the way it was painted too. She gained a deep understanding of the works she explored including what was special about them.

The other kind of tour art galleries provide is a touching tour. It involves blind and partially-sighted visitors being allowed to touch selected works of art as part of a guided tour where a curator also explains the art. Blind art lover Georgina Kleege has suggested that touch tours give a much richer experience than a normal tour, and should be put on for everyone for this reason. It is again about more than just feeling the shape and so working out its form, “seeing” what a sighted person would take in at a glance. It is about gaining a whole different sensory experience of the work: its texture, for example, not just a lesser version of what it looks like.

How might technology help? Well, the company, NeuroDigital Technologies, has developed a haptic glove system for the purpose. Haptic gloves are gloves that contain vibration pads that stimulate the skin of the person in different, very fine ways so as to fool the wearer’s brain into thinking it is touching things of different shapes and textures. Their system has over a thousand different vibration patterns to simulate different feelings of touching surfaces. They also contain sensors that determine the precise position of the gloves in space as the person moves their hands around.

The team behind the idea scanned several works of art using very accurate laser scanners that build up a 3D picture of the thing being scanned. From this they created a 3D model of the work. This then allowed a person wearing the gloves to feel as though they were touching the actual sculpture, feeling all the detail. More than that, the team could augment the experience to give enhanced feelings in places in shadow, for example, or to emphasise different parts of the work.
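As a toy illustration of the general idea (the grid, the numbers and the function here are invented for the sketch, not NeuroDigital’s actual system), a scanned surface could be stored as a grid of heights and compared with the glove’s reported hand position to decide how strongly to vibrate:

```python
# A toy sketch of how a haptic system might work, assuming the scanned
# sculpture is stored as a grid of surface heights (a 'depth map') and
# the glove reports the hand's position in the same coordinates.
import numpy as np

rng = np.random.default_rng(0)
surface = rng.uniform(0.0, 5.0, size=(100, 100))  # stand-in for real scan data (cm)

def vibration_for(x: int, y: int, hand_height: float) -> float:
    """Return a vibration strength (0 = none, 1 = strongest) for a
    hand hovering at hand_height above grid cell (x, y)."""
    gap = hand_height - surface[y, x]
    if gap > 0.5:  # hand well above the virtual surface: no touch
        return 0.0
    # Rougher local texture -> stronger, 'grittier' vibration.
    local_patch = surface[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
    roughness = float(local_patch.std())
    return min(1.0, 0.3 + roughness)

print(vibration_for(50, 50, hand_height=4.0))
```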

A similar system could be applied to historical artefacts too: allowing people to “feel”, not just see, the Rosetta Stone, for example. Perhaps it could also be applied to paintings, to allow a person to feel the brush strokes in a way that just could not otherwise be done. This would give an enhanced version of the experience Lisa found so useful: having her hand guided in front of a painting while the brush strokes and areas are described. Different colours might also be coded with different vibration patterns, allowing a series of different enhanced touch tours of a painting: first exploring its colours, then its brush strokes, and so on.

What about talking tours? Can technology help there? AIs can already describe pictures, but early versions at least were trained on the descriptions people have given to images on the Internet: “a black cat sitting on top of the TV looking cute”, or, for the Mona Lisa, “a young woman staring at you”. That in itself wouldn’t cut it. Neither would training the AI on the normal brief descriptions on the gallery walls next to works of art. However, art books and websites are full of detail and more recent AIs can give very detailed descriptions of art works if asked. These descriptions include what the picture looks like overall, the components, colours, brushstrokes and composition, symbolism, historical context and more (at least for famous paintings). With specific training from curators and art historians the AIs will only get better.

What is still missing for a blind person, though, from the kind of experience Lisa has when experiencing a painting with a guide, is the link to the actual picture in space – having the guide move her hand in front of the painting as the parts are described. However, all that is needed to fill that gap is to combine a chat-based AI with a haptic glove system (and provide a way to link descriptions to spatial locations on the image). Then the descriptions can be linked to the positions of a hand moving in space in front of a virtual version of the picture. Combine that with the kind of system already invented to help blind people navigate, where vibrations on a walking stick indicate directions and times to turn, and the gloves can then not only give haptic sensations of the picture in front of the painting or sculpture, but also guide the person’s movement over it.

Whether you have such an experience in a gallery, in front of the work of art, or in your own front room, blind and partially-sighted people could soon be getting much better experiences of art than sighted people. At which point, as Georgina Kleege suggested for normal touch tours, everyone else will likely want the full “blind” experience too.


Accessible Technology in the Voting Booth


by Daniel Gill, Queen Mary University of London

Voting at an election: people depositing their voting slips
Image AI generated by Vilius Kukanauskas from Pixabay

On Thursday 4th July 2024, millions of adults around the UK went to their local polling station to vote for their representative in the House of Commons. However, for the 18% of adults who have a disability, this can be considerably more challenging. Voters have the right to vote independently and in secret, yet many blind and partially sighted people cannot do so without assistance. Thankfully this is changing, and this election was hailed as the most accessible yet. So how does technology enable blind and partially sighted people to vote independently?

There are two main challenges when it comes to voting for blind and partially sighted people. The names of candidates are listed down the left-hand side of the ballot paper, so firstly a voter needs to find the row of the person who they want to vote for. They then, secondly, need to put a cross in the box to the right. The image below gives an example of what the ballot paper looks like:

A mock up of a "CS4FN" voting slip with candidates
HOPPER, Grace
TURING, Alan Mathison
BENIOFF, Paul Anthony
LOVELACE, Ada

To solve the first problem, we can turn to audio. An audio device can be used to play a recording of the candidates as they appear on the ballot paper. Some charities also provide a phone number to call before the election, with a person who can read this list out. This is great, of course, but it does rely on the voter remembering the position of the person that they want to vote for. A blind or partially sighted voter is also allowed to use a text reader device, or perhaps a smartphone with a special app, to read out what is on the ballot paper in the booth.

Lots of blind and partially sighted people are able to read braille: a way of representing English words using bumps on the paper (read more about braille in this CS4FN article). One might think that this would solve all the problems but, in fact, there is a requirement that all the ballot papers for each constituency have a standard design to ensure they can be counted efficiently and without error.

The solution to the second problem is far more practical: the excitingly named tactile voting device. This is a simple plastic device which is placed on top of the ballot paper. Each of the boxes on the ballot paper (as shown to the right of the image above), has a flap above it with its position number embossed on it. When the voter finds the number of the person they want to vote for, they simply turn over the flap, and are guided by a perfectly aligned square guide to where the box is. The voter can then use that guide to draw the cross in the box.

This whole process is considerably more complicated than it is for those without disabilities – and you might be thinking, “there must be an easier way!” Introducing the McGonagle Reader (MGR)! This device combines both solutions into one device that can be used in the voting booth. Like the tactile voting device, it has flaps which cover each of the boxes for drawing the cross. But next to those are buttons which, when pressed, read out the information of the candidate for that row. This can save lots of time, removing the need to remember the position of each candidate – a voter can simply go down the page, find who they want to vote for and turn over the correct flap.

When people have the right to vote, it is especially important to ensure that they have the ability to use that right. This means that no matter the cost or the logistics, everyone should have access to the tools they need to vote for their representative. Progress is now being made but a lot more work still needs to be done.

To help ensure this happens in future, the RNIB want to know the experiences of those who voted or didn’t vote in the UK 2024 general election – see the survey linked from the RNIB page here.


The basics of Quantum Computing: Qubits


by Paul Curzon, Queen Mary University of London

An eye looking at two blue spheres
Image by Gerd Altmann from Pixabay

Reality is weird, very weird. The first thing you have to do to understand the reality of reality is to drop your common sense. Only then can you start to understand it, especially when it comes to the quantum world of the very small. Our brains evolved to naturally make sense of human-scale things, rather than the very large or very small. Accept the weirdness, though, and there are lots of opportunities, especially for computer scientists. That is why it is now an exciting area of research, with theoretical physicists, engineers and computer scientists working together to make progress.

Not common sense

Imagine you are a person trying to understand the world a thousand years ago. Clearly the world MUST be flat. It looks flat and suggesting you are standing on a sphere is just ridiculous. People living on the other side would obviously fall off if it was a sphere! Except the world is a sphere and people in Australia (or Europe if you are Australian) don’t fall off. Common sense doesn’t work (until you understand how gravity works). That’s why science is so powerful. Common sense also doesn’t work for understanding the reality of the very small. This branch of physics, quantum physics, is very important for computer scientists not only because the building blocks of our computers are becoming ever smaller, but because when you get so small that the laws of quantum physics matter, computers can work in new, exciting ways, ways that are far better than our current computers.

Bits and qubits

Let’s start with binary, the fundamental way we represent information in a computer. The basic building block of information is the bit. A bit is something that can have one of two states. It can be a 1 or a 0. That means a bit can store some information. These two states of 1 and 0 might be physically represented in lots of ways, such as a high voltage stored versus a low voltage stored, or a pulse of light versus no pulse of light, or someone’s hand up versus their hand down. If you have two bits then you can store one of 4 pieces of information in them because of the possible combinations (00, 01, 10 and 11); with three bits you can store 8 different things. Those collections of bits can then stand for different numbers (that is all binary is), and by building big circuits from simple basic circuits that do simple manipulations on bits (i.e., logic gates) we can do ever more complex calculations with them and ultimately everything our current computers are capable of.
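You can see that doubling for yourself with a few lines of Python:

```python
# Listing every state a group of bits can be in: n bits give 2**n
# combinations, which is why each extra bit doubles what you can store.
from itertools import product

for n in (1, 2, 3):
    states = ["".join(bits) for bits in product("01", repeat=n)]
    print(f"{n} bit(s): {2**n} states -> {states}")
```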

The spin of an electron

A pedestrian light showing green/walk
Image by Hans from Pixabay

Bits can be represented by anything that has 2 states. So suppose you want to represent your bits using something really small like electrons. Electrons have a property called spin. You can imagine them as spinning balls of charge (though they are not exactly spinning like a spinning ball … electrons aren’t balls and they aren’t actually rotating in the normal sense – remember reality is weird so these analogies are just there to help give an idea, but it is never as simple as that). Now, electrons can “spin” in exactly one of two ways, called spin up and spin down. There are only two possible kinds of spin because in the quantum world things come in discrete amounts, not continuous ones. They jump from one state to another, like a pedestrian (walk/don’t walk) traffic light going from red to green instantly, rather than gradually changing between them (such as the way a car gradually speeds up to the speed limit). An electron is either spin up or spin down, like the pedestrian lights, never something in between.

Now, it is possible to set the spin of an electron and to measure whether it has spin up or spin down, so an electron can, in principle, be used to store a binary bit given it has two states (spin up for 1 and spin down for 0, say). However, this is where the weirdness really comes in. It turns out that it is possible for an electron to be both spin up and spin down at once as long as the spin is not measured, due to the way the quantum world works. A quantum pedestrian light doing a similar thing would have only one light that could be red or green. However, it would be both red and green at the same time UNTIL someone looked at it to see which state it was in (so measured the state). At that point it would become, and the person would only see, one colour or the other. This is called quantum superposition. To understand this it is better to think about reality being about probabilities, not certainties. Imagine that the electron is like a tossed coin that is still in the air. It has a probability of being Heads and of being Tails. Only when it lands (so is measured) is it actually one or the other. An electron is combining both possibilities until the spin is measured.

The quantum tortoise and the hare

You may have the quaint idea that reality is made of sub-atomic particles (like electrons or protons) that are solid little bits of matter that are very ball like and exist in one place at any given time. Actually they aren’t like that at all. It is better to think of particles as just having probabilities of being at one place or another – they are kind of smeared across space, everywhere at once, like a ripple pattern across a pond, just with different probabilities of actually being in any place when their position is measured. When you do measure their position you find they definitely are in one place or another, appearing to be a particle again, not a wave.

It may help to think of this in terms of watching slow-moving tortoises and fast-moving hares passing you as they race. The position of a slow-moving tortoise you see wander by is easy to call: it has a very high probability of being in a particular place. The position of a fast-moving hare that whizzes past is much harder to call: it has a far lower probability of being in a given place at any time. However, without looking you can’t tell. You just know the probabilities. Of course with particles it isn’t exactly like that, just as an electron’s spin isn’t exactly like a ball spinning. It is only when a particle’s position is actually checked (i.e. measured) that it is definitely at a known place and that smeared probability collapses to certainty. A quantum tortoise and hare racing past would be in all possible positions round the race track, just with different probabilities. Suppose you only checked (so measured their position) at the finish line. It is only because of that measurement that the probabilities of where they were through the race turn into specific, measured and so known positions, with a quantum hare or a quantum tortoise having actually won.

This weirdness is linked to the fact that the fundamental components that reality is made up of are both particles in given places (think of an electron or a proton) and waves passing through space (think of light or ripples in a pond) at the same time. So light behaves like a particle and like a wave. Similarly, an electron does too. 

Electron spin as Qubits

Other properties of sub-atomic particles act in the same way as a particle’s position being smeared across lots of possibilities at once. This includes the spin of an electron. Until it is measured, an electron is superposed in both a spin up and spin down state at the same time (spinning both ways at once!): there is just a probability that the electron is in each state, it isn’t actually definitely in either. That means as long as you do not measure its spin, the electron as a device storing a piece of information is storing both 1 and 0 at the same time, each with a given probability. As such it behaves differently to an actual bit which must be either 1 or 0. We therefore call such an electron-based storage a qubit rather than a bit. 
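You can try out the maths behind this in a few lines of Python. This is just a simulation of the sums on an ordinary computer (not real quantum hardware), showing superposition as a pair of probabilities and measurement as a random collapse:

```python
# A toy simulation of a single qubit. Its state is two 'amplitudes';
# squaring them gives the probability of measuring 0 or 1.
import numpy as np

rng = np.random.default_rng()

ket0 = np.array([1.0, 0.0])                   # definitely 0, like a normal bit
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the Hadamard gate

qubit = H @ ket0   # now in superposition: both 0 and 1 at once

probs = np.abs(qubit) ** 2
print(probs)       # [0.5 0.5] - a 50/50 chance of each outcome

# Measuring collapses the superposition to a definite 0 or 1:
outcomes = [rng.choice([0, 1], p=probs) for _ in range(10)]
print(outcomes)    # a random mix of 0s and 1s, roughly half each
```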

In theory, we can do computations on qubits, manipulating and combining them in simple ways using the quantum equivalent of logic gates. Once we have created quantum logic gates to do simple manipulations, we can combine those gates into bigger and bigger circuits that do more complicated quantum calculations. As long as the states of the qubits are not measured, all the states through the circuit are superposed with particular probabilities. Unlike a normal circuit, which does one series of computations based on its inputs, these quantum circuits are in effect doing all possible computations of that circuit at once. It is only when we measure the answer at the output, say, that the qubits in the circuit are fixed at either 1 or 0 and an actual result is delivered. This is like the tortoise and hare being everywhere (whatever racing strategy they followed) with some probability until we measure the result at the finish line (the output of the race). Because all the states exist at once, lots of computations happen simultaneously, which means that such a circuit can, in theory, and with the right algorithms, deliver answers far, far faster than a conventional circuit possibly could, given the latter can only do one computation at a time.

From theory to practice

That is the theory, and it is gradually being realised in practice. Qubits can be created and their values changed. Various quantum logic gates have also now been invented and so small quantum computers do now exist. Quantum algorithms to do certain tasks quickly have been invented. Since the original ideas were mooted, progress has been relatively slow, but now that the ideas have been shown to work in practice, more and more is being achieved, making it an exciting time to be doing quantum computing research.

More on …

  • Quantum Computing (to come)


Gutta-Percha: how a tree launched a global telecom revolution


by Paul Curzon, Queen Mary University of London

(from the archive)

Rubber tree being tapped
Image from Pixabay

Obscure plants and animals can turn out to be surprisingly useful. The current mass extinction of animal and plant species needs to be stopped for lots of reasons but an obvious one is that we risk losing forever materials that could transform our lives. Gutta-percha is a good example from the 19th century. It provided a new material with uses ranging from electronic engineering to bioengineering. It even transformed the game of golf. Perhaps its greatest claim to fame though is that it kick-started the worldwide telecoms boom of the 19th century that ultimately led to the creation of global networks including the Internet.

Gutta-percha trees are native to South East Asia and Australia. Their sap is similar to rubber. It’s actually a natural polymer: a kind of material made of gigantic molecules built up of smaller structures that are repeated over and over again. Plastics, amber, silk, rubber and wool are all made of polymers. Though very similar to rubber, gutta-percha is biologically inert – it doesn’t react with biological materials – and that was the key to its usefulness. It was discovered by Western explorers in the middle of the 17th century, though local Malay people already knew about it and used it.

Chomping wires

So how did it play a part in creating the first global telecom network? Back in the 19th century, the telegraph was revolutionising the way people communicated. It meant messages could be sent across the country in minutes. The trouble was when the messages got to the coast they ground to a halt. Messages could only travel across an ocean as fast as a boat could take them. They could whiz from one end of America to the other in minutes but would then take several weeks to make it to Europe. The solution was to lay down undersea telegraph cables. However, to carry electricity an undersea cable needs to be protected and no one had succeeded in doing that. Rubber had been tried as an insulating layer for the cables but marine animals and plants just attacked it, and once the cable was open to the sea it became useless for sending signals. Gutta-percha on the other hand is a great insulator too but it doesn’t degrade in sea-water.

As it was the only known material that worked, soon all marine cable used Gutta-percha and as a result the British businessmen who controlled its supply became very rich. Soon telegraph cables were being laid everywhere – the original global telecoms network. To start with the network carried telegraph signals then was upgraded to voice and now is based on fibre-optics – the backbone of the Internet.

Rotting teeth

Gutta-percha has also been used by dentists – just as marine animals don’t attack it, it doesn’t degrade inside the human body either. That together with it being easy to shape makes it perfect for dental work. For example, it is used in root canal operations. The pulp and other tissue deep inside a rotting tooth are removed by the dentist leaving an empty chamber. Gutta-percha turns out to be an ideal material to fill the space, though medical engineers and materials scientists are trying to develop synthetic materials like Gutta-percha, but that have even better properties for use in medicine and dentistry.

Dimpled balls

That just leaves golf! Early golf balls were filled with feathers. In 1848 Robert Adams Paterson came up with the idea of making them out of Gutta-percha since it was much easier to make than the laborious process of sewing balls of feathers. It was quickly realised, if by accident, that after they had been used a few times they would fly further. It turned out this was due to the dimples that were made in the balls each time they were hit. The dimples improved the aerodynamics of the ball. That’s why modern golf balls are intentionally covered in dimples.

So gutta-percha has revolutionised global communications, changed the game of golf and even helped people with rotting teeth. Not bad for a tree.


Even the dolphins use pocket switched networks!

(from the archive)

Dolphin leaping in waves off Panama City
Image by Heather Williams from Pixabay

Email, texting, Instant Messaging, instant response… one of the things about modern telecoms is that they fuel our desire to “talk” to people anytime, anywhere, instantly. The old kind of mail is dismissed as “snail mail”. A slow network is a frustrating network. So why would anyone be remotely interested in doing research into slow networks? Surprisingly, slow networks deserve study. Professor Jon Crowcroft of the University of Cambridge and his team were early researchers in this area, and this kind of network could be the network of the future. The idea is already being used by the dolphins (not so surprising, I suppose, given that according to Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy” they are the second most intelligent species on Earth… after the mice).

From node to node

Traditional networks rely on having lots of fixed network “nodes” with lots of fast links between them. These network nodes are just the computers that pass on the messages from one to the other until the messages reach their destinations. If one computer in the network fails, it doesn’t matter too much because there are enough connections for the messages to be sent a different way.

There are some situations where it is impractical to set up a network like this though: in outer space for example. The distances are so far that messages will take a long time – even light can only go so fast! Places like the Arctic Circle are another problem: vast areas with few people. Similarly, it’s a problem under the sea. Signals don’t carry very well through water so messages, if they arrive at all, can be muddled. After major disasters like Hurricane Katrina or a Tsunami there are also likely to be problems.

It is because of situations like these that computer scientists started thinking about “DTNs”. The acronym can mean several similar things: Delay Tolerant Networks (as in space, where the network needs to cope with everything being slow), Disruption Tolerant Networks (as in the deep sea, where the links may come and go) or Disaster Tolerant Networks (as after a Tsunami, where lots of the network goes down at once). To design networks that work well in these situations you need to think in a different way. When you also take into account that computers have gone mobile – they no longer just sit on desks but are in our pockets or handbags – this leads to the idea of a “ferrying network” or, as Jon Crowcroft calls them, a “Pocket Switched Network”. The idea is to use the moving pocket computers to make up a completely new kind of network, where some of the time messages move around because the computers carrying them are moving themselves, not because the message itself is moving. As a computer moves around it passes near other computers and can exchange messages, carrying a message on for someone else until it is near another computer it can jump to.
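Here’s a toy simulation of that ferrying idea (the numbers are purely illustrative): people wander around at random, and a message hops between any two who come within radio range, eventually reaching its destination with no fixed network at all.

```python
# A toy pocket switched network: 20 people wander a 100x100 area and a
# message hops between any two who come within 5 units of each other.
import random

random.seed(42)
N, RADIO_RANGE, WORLD = 20, 5.0, 100.0
pos = [[random.uniform(0, WORLD), random.uniform(0, WORLD)] for _ in range(N)]
has_message = [False] * N
has_message[0] = True     # person 0 writes the message
destination = N - 1

step = 0
while not has_message[destination]:
    step += 1
    for p in pos:         # everyone wanders a little
        p[0] = min(WORLD, max(0.0, p[0] + random.uniform(-3, 3)))
        p[1] = min(WORLD, max(0.0, p[1] + random.uniform(-3, 3)))
    for i in range(N):    # the message hops to anyone within range
        if has_message[i]:
            for j in range(N):
                dist = ((pos[i][0] - pos[j][0]) ** 2
                        + (pos[i][1] - pos[j][1]) ** 2) ** 0.5
                if dist < RADIO_RANGE:
                    has_message[j] = True

print(f"Message reached its destination after {step} time steps")
```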

From Skidoo to you

A skidoo with driver standing next to it
Image by raul olave from Pixabay

How might such networks be useful in reality? Well one was set up for the reindeer farmers in the Arctic Circle. They roam vast icy wastelands on skidoos, following their reindeer. They are very isolated. There are no cell phone masts or internet nodes and for long periods they do not meet other people at all. The area is also too large to set up a traditional network cheaply. How could they communicate with others?

They set up a form of pocket switched network. Each carried a laptop on their skidoo. A series of computers were also set up in cairns spread around the icy landscape. When the reindeer farmers using the network want a service, like delivering a message, the laptop stores the request until they pass within range of one of the other computers, perhaps on someone else’s skidoo. The computer then automatically passes the message on. The new laptop takes the message with it and might later pass a cairn, where the message hops again, then waits till someone else passes by heading in the right direction. Eventually it makes a hop to a computer that passes within range of a network point connected to the Internet. It may take a while but the mail eventually gets through – and much faster than waiting for the farmer to be back in net contact directly.

Chatting with Dolphins

Even the dolphins got in on the act. US scientists wanted to monitor coastal water quality. They hit on the idea of strapping sensors onto dolphins that measure the quality wherever they go. The only problem is that dolphins spend a lot of time in the deep ocean where the results can’t easily be sent back. The solution? Give them a normal (well, dolphin-adapted) cell phone. Their phone stores the results until it is in range of their service provider off the coast. By putting a receiver in the bays the dolphins return to most frequently, they can call home to pass on the data whenever they are there.

The researchers encountered an unexpected problem though. The dolphins’ memory cards kept inexplicably filling up. Eventually they realised this was because the dolphins kept taking trips across the Atlantic where they came in range of the European cell networks. The European telecom companies, being a friendly bunch, sent lots of text messages welcoming these newly appeared phones to their network. The memory cards were being clogged up with “Hellos”!

The Cambridge team investigated how similar networks might best be set up and used for people on the move, even in busy urban environments. To this end they designed a pocket switched network called Haggle. Using networks like Haggle, it is possible to have peer-to-peer style networks that side-step the commercial networks. If enough people join in then messages can just hop from phone to phone, using bluetooth links say, as they passed near each other. They might eventually get to the destination without using any long distance carriers at all.

The more the merrier

With a normal network, as more people join the network it clogs up as they all try to use the same links to send messages at the same time. Some fundamental theoretical results have shown that with a pocket switched network, the capacity of the network can actually go up as more people join – because of the way the movement of the people constantly make new links.

Pocket switched networks are a bit like gases – the nodes of the network are like gas molecules constantly moving around. A traditional network is like a solid – all the molecules, and so nodes, are stationary. As more people join a gaseous network it becomes more like a liquid, with nodes still moving but bumping into other nodes more often. The Cambridge team explored the benefits of networks that can automatically adapt in this way to fit the circumstances: making phase transitions just like water boiling or freezing.

One of the important things to understand to design such a network is how people pass others during a typical day. Are all people the same when it comes to how many people they meet in a day? Or are there some people who are much more valuable as carriers of messages? If so, those are the people the messages need to reach to get to the destination the fastest!

To get some hard data Jon and his students handed out phones. In one study a student handed out adapted phones at random on a Hong Kong street, asking that they be returned a fixed time later. The phones recorded how often they “met” each other before being returned. In another similar experiment the phones were given out to a large number of Cambridge students to track their interactions. This and other research shows that to make a pocket switched network work well, there are some special people you need to get the messages to! Some people meet the same people over and over, and very few others. They are “cliquey” people. Other more “special” people regularly cross between cliques – the ideal people to take messages across groups. Social Anthropology results suggest there are also some unusual people who rather than just networking with a few people, have thousands of contacts. Again those people would become important message carriers.

So the dolphins may have been the “early adopters” of pocket switched networks but humans may follow. If we were to fully adopt them it could completely change the way the telecom industry works… and if we (or the dolphins) ever do decide to head en masse for the far reaches of the solar system, pocket switched networks like Haggle will really come into their own.

– Paul Curzon, QMUL, based on a talk given by Jon Crowcroft at Queen Mary in Jan 2007.


NASA’s interstellar probe Voyager 1 went silent until computer scientists transmitted a fix that had to travel 15 billion miles!

by Jo Brodie, Queen Mary University of London

In 1977 NASA scientists at the Jet Propulsion Laboratory launched the interstellar probe Voyager 1 into space – and it just keeps going. It has now travelled 15 BILLION miles (24 billion kilometres), which is the furthest any human-made thing has ever travelled from Earth. It communicates with us here on Earth via radiowaves which can easily cross that massive distance between us. But even travelling at the speed* of light (all radiowaves travel at that speed) each radio transmission takes 22.5 hours, so if NASA scientists send a command they have to wait nearly two days for a response. (The Sun is ‘only’ 93 million miles away from Earth and its light takes about 8 minutes to reach us.)
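You can check that delay yourself by dividing the distance by the speed of light. A quick Python check (using rounded values, which is why it comes out a little under the 22.5 hours quoted above):

```python
# Rough check of Voyager 1's signal delay: time = distance / speed.
distance_km = 24_000_000_000       # roughly 24 billion km from Earth
speed_of_light_km_per_s = 300_000  # light (and radio waves) travel ~300,000 km/s

delay_s = distance_km / speed_of_light_km_per_s
print(f"One-way delay: about {delay_s / 3600:.1f} hours")            # ~22.2 hours
print(f"Command plus reply: about {2 * delay_s / 86400:.1f} days")   # ~1.9 days
```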

FDS – The Flight Data System

The Voyager 1 probe has sensors to detect things like temperature or changes in magnetic fields, a camera to take pictures and a transmitter to send all this data back to the scientists on Earth. One of its three onboard computers (the Flight Data System, or FDS) takes that data, packages it up and transmits it as a stream of 1s and 0s to the waiting scientists back home who decode it. Voyager 1 is where it is because NASA wanted to send a probe out beyond the limits of our Solar System, into ‘interstellar space’ far away from the influence of our Sun to see what the environment is like there. It regularly sends back data updates which include information about its own health (how well its batteries are doing etc) along with the scientific data, packaged together into that radio transmission. NASA can also send up commands to its onboard computers too. Computers that were built in 1977!

The pale blue dot

‘The Pale Blue Dot’. In the thicker apricot-coloured band on the right you might be able to see
a tiny dot about halfway down. That’s the Earth! Full details of this famous 1990 photo here.

Although its camera is no longer working, its most famous photograph is this one, the Pale Blue Dot, a snapshot of every single person alive on the 14th of February 1990. However, as Voyager 1 was around 6 billion kilometres from home when it looked back at the Earth to take that photograph, you might have some difficulty in spotting anyone! But they’re somewhere in there, inside that single pixel (actually less than a pixel!) which is our home.

As Voyager 1 moved further and further away from our own planet, visiting Jupiter and Saturn before travelling to our outer Solar System and then beyond, the probe continued to send data and receive commands from Earth. 

The messages stopped making sense

All was going well, with the scientists and Voyager 1 ‘talking’ to one another, until November 2023, when the binary 1s and 0s it normally transmitted no longer had any meaningful pattern to them – it was gibberish. The scientists knew Voyager 1 was still ‘alive’ as it was able to send that signal, but they didn’t know why its signal no longer made any sense. Given that the probe is nearly 50 years old and operating in a pretty harsh environment, people wondered if that was the natural end of the project, but they were determined to try and re-establish normal contact with the probe if they could.

Searching for a solution

They pored over almost-50-year-old paper instruction manuals and blueprints to try and work out what was wrong, and it seemed that the problem lay in the FDS. Any scientific data being collected was not being correctly stored in the ‘parcel’ that was transmitted back to Earth, and so was lost – Voyager 1 was sending empty boxes. At that distance it’s too far to send an engineer up to switch it off and on again, so instead they sent a command to try and restart things. The next message from Voyager 1 was a different string of 1s and 0s. Not quite the normal data they were hoping for, but also not entirely gibberish. A NASA scientist decoded it and found that Voyager 1 had sent a readout of the FDS’ memory. That told them where the problem was: a damaged chip meant that part of its memory couldn’t be properly accessed. They had to move what was stored in the damaged chip elsewhere.

That’s easier said than done. There’s not much available space, as the computers can only store 68 kilobytes of data in total (absolutely tiny compared to today’s computers and devices). There wasn’t one single place where NASA scientists could move the memory as a single block; instead they had to break it up into pieces and store it in different places. In order to do that they had to rewrite some of the code so that each separated piece contained information about how to find the next piece. Imagine if a library didn’t keep a record of where each book was – it would make it very hard to find and read the sequel!
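The trick is a bit like a chain of notes, each ending with the address of the next. Here’s a toy sketch of the idea in Python (illustrative only – the addresses and instructions are invented, and the real FDS is programmed at a far lower level):

```python
# Code split into fragments stored at scattered addresses, each ending
# with a pointer to where the next piece lives, so the computer can
# still run everything in the right order.
memory = {
    0x10: ("collect sensor data", 0x7A),  # fragment, address of the next piece
    0x7A: ("package the data", 0x3C),
    0x3C: ("transmit to Earth", None),    # None marks the final piece
}

address = 0x10
while address is not None:
    instruction, next_address = memory[address]
    print(f"at {address:#04x}: {instruction}")
    address = next_address
```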

Earlier this year NASA sent up a new command to Voyager 1, giving it instructions on how to move a portion of its memory from the damaged area to its new home(s) and waited to hear back. Two days later they got a response. It had worked! They were now receiving sensible data from the probe.  

Voyager team celebrates engineering data return, 20 April 2024 (NASA/JPL-Caltech). “Shown are Voyager team members Kareem Badaruddin, Joey Jefferson, Jeff Mellstrom, Nshan Kazaryan, Todd Barber, Dave Cummings, Jennifer Herman, Suzanne Dodd, Armen Arslanian, Lu Yang, Linda Spilker, Bruce Waggoner, Sun Matsumoto, and Jim Donaldson.”

For a while it was just basic ‘engineering data’ (about the probe’s status) but they knew their method worked and didn’t harm the distant traveller. They also knew they’d need to do a bit more work to get Voyager 1 to move more memory around in order for the probe to start sending back useful scientific data, and…

Success!

…in May, NASA announced that scientific data from two of Voyager 1’s instruments was finally being sent back to Earth, and in June the probe was fully operational. You can follow Voyager 1’s updates on Twitter / X via @NASAVoyager.

Did you know?

Both Voyager 1 and Voyager 2 carry with them a gold-plated record called ‘The Sounds of Earth’ containing “sounds and images selected to portray the diversity of life and culture on Earth”. Hopefully any aliens encountering it will have a record player (but the Voyager craft do carry a spare needle!) Credit: NASA/JPL

References

Lots of articles helped in the writing of this one and you can download a PDF of them here. Featured image credit showing the Voyager spacecraft: NASA/JPL.

*Radiowaves and light are part of the electromagnetic or ‘EM’ spectrum, along with microwaves, gamma rays, X-rays, ultraviolet and infrared. All these waves travel at the same speed in a vacuum, the speed of light (300,000,000 metres per second, sometimes written as 3 × 10⁸ m/s or 3 × 10⁸ m s⁻¹), but the waves differ by their frequency and wavelength.


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


EPSRC supports this blog through research grant EP/W033615/1.