Faster fiber

Polina Bayvel, Professor of Optical Communications at UCL, and her team have just set a new speed record for sending data over real-world optical cable. They managed to send about 10 times more data than the best commercial services. Remarkably, this was done without changing the cable or other core infrastructure: the record was set over existing fibre running through a city centre, with all the interference that causes, and all the grime, wear and tear that come with real use.

What was the secret? Commercial fibre optics typically use wavelengths of 850, 1300 and 1550 nanometres. That is infrared light, which some animals can see, including some snakes, fish and insects (and vampire bats). We humans need special cameras that convert it to the visible range before we can see it. What we can do, though, is create lasers that send pulses of infrared at these wavelengths, and design hardware that turns the infrared pulses back into data. Polina’s team developed special hardware that could send data over a much larger range of light frequencies than existing commercial systems: wavelengths between 1264 and 1618 nanometres. By mixing this much wider range of wavelengths together they could send more data at the same time – but that is only useful if the hardware can extract the separate signals from the mixed-up mess of them at the other end. The test showed that their hardware could do exactly that in real-world conditions, sending data from their lab in central London out to a data centre at Canary Wharf over the existing cables, and back: around 10 miles in total.
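A rough back-of-the-envelope calculation shows why the wider wavelength band matters so much: capacity scales with the band's width in frequency terms (frequency = speed of light / wavelength). The 1264–1618 nm range is from the article; the conventional C-band limits used for comparison (1530–1565 nm) are standard textbook values, not from the article:

```python
C = 3.0e8  # speed of light in m/s

def band_hz(lambda_min_nm, lambda_max_nm):
    """Optical bandwidth in Hz for a wavelength range (f = c / wavelength)."""
    return C / (lambda_min_nm * 1e-9) - C / (lambda_max_nm * 1e-9)

ucl_band = band_hz(1264, 1618)   # the range used in the record
c_band = band_hz(1530, 1565)     # conventional C-band (textbook value)

print(f"Record band: {ucl_band / 1e12:.1f} THz")  # ~51.9 THz
print(f"C-band:      {c_band / 1e12:.1f} THz")    # ~4.4 THz
print(f"Ratio:       {ucl_band / c_band:.1f}x")   # roughly a 10x capacity boost
```

The roughly tenfold increase in usable bandwidth lines up with the "10 times more data" figure above.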

It means that in future we will be able to send far more data over existing cable networks with no need to replace the cables, avoiding the extra time and costs (never mind the road works). The speed of 450 terabits per second is enough to stream 50 million films at the same time. No one actually needs to do that, of course. However, our technologies do seem to voraciously use up whatever capacity we create, and with the ever-increasing use of AI tools and their need for masses of data, it may well be that this ability to send more data is needed sooner than we might think.
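The streaming claim is easy to sanity-check: divide the total capacity by the number of films. The per-stream rate that pops out is about 9 megabits per second, a typical HD streaming rate (that interpretation is our assumption, not a figure from the article):

```python
total_bps = 450e12   # 450 terabits per second
films = 50e6         # 50 million simultaneous streams

per_stream = total_bps / films
print(per_stream / 1e6, "Mbit/s per stream")  # 9.0 Mbit/s per stream
```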

Paul Curzon, Queen Mary University of London

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


Going Postal: A review

Semaphore tower showing all the flag positions
Image by Clker-Free-Vector-Images from Pixabay adapted by CS4FN

Anyone claiming to be a hard-core Computer Scientist would be ashamed if they had to admit they hadn’t read Terry Pratchett. If you are one and you haven’t, then ‘Going Postal’ is a good place to start.

‘Going Postal’ is a must for anyone interested in networks. Not because it has any bearing on reality. It doesn’t. It’s about Discworld, a flat world that is held up on the back of elephants, and where magic reigns. Technology is starting to get a foothold though. For example, cameras, computers and movies have all been invented…though they usually have an Elf inside. Take cameras: they work because the Elf has a paint box and an easel. Take too many sunsets and he’ll run out of pink! It is all incredibly silly…but it works and so does the technology.

Now telecommunications technology is gaining a foothold…Corrupt business is muscling in and the post office is struggling to survive. Who would want to send a letter when they can send a c-mail over the Clacks? The Clacks are a network of semaphore towers that allow messages to ‘travel at the speed of light’.

At each tower the operators

“pound keys, kick pedals and pull levers as fast as they can”

to forward the message to the next tower in the network, and so on to its destination. The Clacks are so fashionable that people have even started carrying pocket semaphore flags everywhere they go, so they can send messages to people on the other side of the room.

“But can you write
S.W.A.L.K. on a clacks?
Can you seal it with
a loving kiss?
Can you cry tears
on to a clacks,
can you smell it,
can you enclose
a pressed flower?
A letter is more than
just a message.”

Moist von Lipwig, a brilliant con-artist who just did one con too many, is given the job of saving the Post Office… his choice was ‘Take the job or die’. Not, actually, such a good deal given the last few Postmasters all died on the job… in the space of a few weeks.

Will he save the Post Office, or is the march of technology unstoppable? And just who are the ‘Smoking GNU’ that you hear whispers about on the Clacks?

Reading this book has got to be the most fun way imaginable of learning about telecom networks, not to mention entrepreneurship and the effect of computers on society. None of the actual technology is the same as in our world of course, but the principle is the same: transmission codes, data and control signals, simplex and duplex transmissions, image encoding, internet nodes, encryption, e-commerce, phreakers and more…they are all there, which just goes to show computer science is not just about our current computer technology. It all applies even when there is no silicon in sight.

Oh, and this is the 33rd Discworld novel, so if you do get hooked, don’t expect to get much more done for the next few weeks as you catch up.

Paul Curzon, Queen Mary University of London



This blog is funded by EPSRC on research agreement EP/W033615/1.


The first Internet concert

Severe Tire Damage
Severe Tire Damage. Image by Strubin, CC BY-SA 4.0 via Wikimedia Commons

Which band was the first to stream a concert live over the Internet? The Rolling Stones decided, in 1994, it should be them. After all, they were one of the greatest, most innovative rock bands of all time. A concert from their tour of that year, in Dallas, was therefore broadcast live. Mick Jagger addressed not just the 50,000 packed into the stadium but the whole world, welcoming them with: “I wanna say a special welcome to everyone that’s, climbed into the Internet tonight and, uh, has got into the MBone. And I hope it doesn’t all collapse.” Unknown to them when planning this publicity coup, another band had got there first: a band of Computer Scientists from Xerox PARC, DEC and Apple – research centres responsible for many innovations, including many of the ideas behind graphical user interfaces, networks and multimedia on the Internet – had played live on the Internet the year before!

The band which actually went down in history was called Severe Tire Damage. Its members were Russ Haines and Mark Manasse (from DEC), Steven Rubin (a Computer-Aided Design expert from Apple) and Mark Weiser (famous for the ideas behind calm computing, from Xerox PARC). They played a concert at Xerox PARC on June 24, 1993. At the time researchers there were working on a system called MBone which provided a way to do multimedia over the Internet for the first time. Now we take that for granted (just about everyone with a computer or phone does Zoom and Teams calls, for example) but then the Internet was only set up for exchanging text and images from one person to another. MBone, short for multicast backbone, allowed packets of data of any kind (including video data) from one source to be sent to multiple Internet addresses rather than just one. Sites that joined the MBone could send and receive multimedia data, including video, live to all the others in one broadcast. This meant that, for the first time, video calls between multiple people over the Internet were possible. The researchers needed to test the system, of course, so they set up a camera in front of Severe Tire Damage and live-streamed the performance to other researchers on the nascent MBone around the world (research can be fun at the same time as being serious!). Possibly there was only a single Australian researcher watching at the time, but it is the principle that counts!

On hearing about the publicity around the Rolling Stones concert, and understanding the technology of course, they decided it was time for one more live Internet gig to secure their place in history. Immediately before the Rolling Stones started their gig, Severe Tire Damage broadcast their own live concert over the MBone to all those (including journalists) waiting for the main act to arrive online. In effect they set themselves up as an unbilled Internet opening act for the Stones, even though they were nowhere near Dallas. That is partly the point, of course: you no longer had to all be in one place to be part of the same concert. So the Rolling Stones, sadly for them, weren’t even the first to play live over the Internet on that particular day, never mind ever!

– Paul Curzon, Queen Mary University of London


Robert Weitbrecht and his telecommunication device for the deaf

Robert Weitbrecht was born deaf. He went on to become an award-winning electronics scientist who invented the acoustic coupler (or modem) and a teletypewriter (or teleprinter) system allowing deaf people to communicate via a normal phone call.

A telephone modem: the telephone handset slots into a teletypewriter, here with a screen rather than a printer.
A telephone modem: Image by Juan Russo from Pixabay

If you grew up in the UK in the 1970s with any interest in football, then you may think of teleprinters fondly. They were the way you found out the football results at the final whistle, watching for your team’s result on the final score TV programme. Reporters at football grounds across the country typed in the results, which then appeared to the nation one at a time as a teleprinter slowly typed them out at the bottom of the screen.

Teleprinters were a natural, if gradual, development from the telegraph and Morse code. Over time a different, simpler, binary-based code was developed. Then, by attaching a keyboard and creating a device to convert key presses into the binary code to be sent down the wire, you could type messages instead of tapping out a code. Anyone could now do it, so typists replaced Morse code specialists. The teleprinter was born. In parallel, of course, the telephone was invented, allowing people to talk to each other by converting the sound of someone speaking into an electrical signal that was converted back into sound at the other end. Then you didn’t even need to type, never mind tap, to communicate over long distances. Telephone lines took over. However, typed messages still had their uses, as the football results example shows.
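The teleprinter's binary code was a fixed-length lookup table: each character became a pattern of 5 bits, as in the real Baudot/ITA2 code. The sketch below shows the principle, though the bit patterns here are invented for illustration – they are not the real ITA2 table:

```python
# Toy 5-bit character code in the spirit of teleprinter codes.
# The bit patterns are generated for illustration -- not the real ITA2 table.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
ENCODE = {ch: format(i, "05b") for i, ch in enumerate(ALPHABET)}
DECODE = {bits: ch for ch, bits in ENCODE.items()}

def to_wire(message):
    """Key presses -> stream of bits sent down the wire."""
    return "".join(ENCODE[ch] for ch in message)

def from_wire(bits):
    """Stream of bits -> printed characters, taken 5 bits at a time."""
    return "".join(DECODE[bits[i:i + 5]] for i in range(0, len(bits), 5))

wire = to_wire("FULL TIME")
print(wire)             # one long stream of 5-bit groups
print(from_wire(wire))  # FULL TIME
```

Because every character is the same length, the receiving machine needs no gaps or timing skill to split the stream back up – which is exactly why typists could replace Morse specialists.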

Another advantage of the teletypewriter/teleprinter approach over the phone was that it could be used by deaf people. However, teleprinters originally worked over separate networks, as the phone network was built to carry analogue voice data, and the companies controlling phone networks across the world generally didn’t allow others to mess with their hardware. You couldn’t replace the phone handset with your own device that created electrical pulses to send directly over the phone line. Phone lines were for talking over, via one of the phone company’s handsets. However, phone lines were universal, so if you were deaf you really needed to be able to communicate over the phone, not over some special network that no one else had. But how could that work at a time when you couldn’t replace the phone handset with a different device?

Robert Weitbrecht solved the problem after being prompted to do so by a deaf orthodontist, James Marsters. He created an acoustic coupler – a device that converted between sound and electrical signals – that could be used with a normal phone. It suppressed echoes, which improved the sound quality. Using old, discarded teletypewriters he created a usable system. Slot the phone mouthpiece and earpiece into the device and the machine “talked” over the phone, in an R2-D2-like language of beeps, to other machines like it. It turned the electrical signals from a teletypewriter into beeps that could be sent down a phone line via its mouthpiece. It also decoded beeps received via the phone earpiece back into the electrical form needed by the teleprinter. You typed at one end, and what you typed came out on the teleprinter at the other (and vice versa). Deaf and hard of hearing people could now communicate with each other over a normal phone line using normal phones! The idea of a Telecommunications Device for the Deaf that worked with normal phones was born. However, such devices were still not strictly legal in the US, so James Marsters and others lobbied Washington to allow them.
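The coupler's "language of beeps" worked by what is now called frequency-shift keying: one audio tone for a 1 bit, a different tone for a 0. The sketch below uses 1400 Hz and 1800 Hz, the tone pair commonly quoted for later TTY devices, though treat the exact values as illustrative:

```python
MARK, SPACE = 1400, 1800  # tone frequencies in Hz (illustrative TTY-style values)

def bits_to_tones(bits):
    """Each bit becomes a short burst of one of two tones down the phone line."""
    return [MARK if b == "1" else SPACE for b in bits]

def tones_to_bits(tones):
    """The receiving coupler listens for which tone it hears and recovers the bit."""
    return "".join("1" if t == MARK else "0" for t in tones)

sent = bits_to_tones("1011001")
print(sent)                 # [1400, 1800, 1400, 1400, 1800, 1800, 1400]
print(tones_to_bits(sent))  # 1011001
```

Because everything crossing the line is just sound, the phone company's hardware never needs to be touched – which was the whole point.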

The idea (and legalisation) of acoustic couplers then inspired others to develop similar modems for other purposes, in particular to allow computers to communicate via the telephone network using dial-up modems. You no longer needed special physical networks for computers to link to each other: they could just talk over the phone! Dial-up bulletin boards were an early application, where you could dial up a computer and leave messages that others could dial up and read via their own computers… and from that idea ultimately emerged chat rooms, social networks and the myriad other ways we now do group communication by typing.

The first ever (long distance) phone call between two deaf people (Robert Weitbrecht and James Marsters) using a teletypewriter / teleprinter was:

“Are you printing now? Let’s quit for now and gloat over the success.”

Yes, let’s.

– Paul Curzon, Queen Mary University of London


Adrian Stokes: Internet pioneer

An abstract schematic of the UK part of the ARPANET. RAL and others connect to UCL which connects to the US via Norway.
Image by Paul Curzon

We take the Internet for granted now, but it is not that long ago that it did not exist at all. Adrian Stokes, OBE, disabled from birth with spina bifida, was one of the people who helped build it: a celebrated “Internet pioneer”. He was, for example, responsible for setting up the first email service in the UK, and so the first transatlantic email system, as well as providing the service linking other UK universities to the network, making it work as a network of networks spanning different countries.

He worked on ARPANET, the precursor to the Internet. It was a research project funded by the US Department of Defense exploring the future of communication networks. Up to that point there were global networks, but they were based on what is called circuit switching. Think of an old-fashioned telephone exchange. Each person had a direct line – an electrical circuit – connecting them to the operator. When you asked the operator to connect you to someone, the operator would plug in a wire that connected your line to theirs, making a new direct circuit from you to them. When you talked, your voice was converted to an analogue signal (a continuously changing electrical signal) which passed down that wire – along the circuit. Transatlantic telephone cables even allowed circuits, so phone calls, to be set up between countries. Early computers connected to each other over phone lines in this way, by converting their data into sounds.

ARPANET worked differently to a circuit-based system. It was a packet-switched network. It treated data sent over the network as binary, just as the computer itself does internally. This contrasted with the analogue system then used to send sound over early phones. Importantly, the binary data being sent was divided up into fixed-size groups of bits called packets. Each packet was then sent separately over the network. In this system there is no fixed circuit from source to destination that the data travels down, just lots of different computers connected to each other. On receiving a packet of data, each computer, or node, of the network passes it on to another until eventually it arrives at the target computer. A key advantage is that each of those packets can go by a different route, travelling between different computers. They can even arrive out of order. The data no longer travels along a single circuit. The packets are put back together (in the right order) on reaching the destination, reconstructing the original so that the fact it was ever split up is invisible to the person receiving the data. Extra information is added to each packet beyond the actual data to make the system work, such as a destination address to indicate where it is going, and the number of the packet so the order can be reconstructed if packets arrive out of sequence. Managing the packets and their journey to the destination is done by software implementing a protocol: a set of communication rules, agreed between the computers on the network, that allows them to interpret the streams of bits arriving from other computers.
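The splitting, shuffling and reassembly just described can be sketched in a few lines. This is a toy model of the idea, not the real ARPANET protocol (real packets also carry a source address, checksums and more):

```python
import random

def to_packets(data, dest, size=4):
    """Split a message into fixed-size packets, each carrying an address and sequence number."""
    return [{"dest": dest, "seq": i, "payload": data[i * size:(i + 1) * size]}
            for i in range((len(data) + size - 1) // size)]

def reassemble(packets):
    """Packets may arrive in any order; the sequence numbers restore it."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("HELLO ARPANET", dest="UCL")
random.shuffle(packets)     # simulate packets taking different routes and arriving out of order
print(reassemble(packets))  # HELLO ARPANET
```

However the packets are jumbled in transit, sorting on the sequence numbers makes the split invisible to the receiver – exactly the point made above.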

So ARPANET consisted of a series of computers that acted as the nodes of the network. Each had to be programmed with software to run the protocol, passing packets on in their journey to the destination, and pulling the original data out and reconstructing it if that computer was the destination. UCL were working with the ARPANET team, exploring how to make it work across continents, so had to program one of their computers to make it an ARPANET node. Once done, it could connect to the ARPANET via a satellite link in Norway.

At first, the ARPANET was set up just as a way to access data on other computers as though it were on your own local computer. However, other services could be provided on top of the basic protocols. It just amounts to writing code for your node’s computer to turn data into packets, and to interpret the data in arriving packets, in the way needed by the new application. For example, a way to access files on other computers as though they were on yours was added. Much, much later, of course, code to allow communication through web pages was written, and the World Wide Web was born, sitting on top of the Internet.

This was one of the jobs Adrian Stokes did. He wrote code for the UCL computers that could treat packets of data as email messages rather than just files. Users could write messages and send them to people on other computers on the ARPANET without needing to know where those computers actually were. It was the first UK email service.

Once UCL had a link to the ARPANET, they could also extend it. One of Adrian’s other jobs was managing onward links around the UK. Researchers in other UK universities could set up their own computers as ARPANET nodes (write and run the software on their computer) and then connect their computers to the UCL one. Networks their computers were linked to could then also connect to the ARPANET. In doing so they created a UK ARPANET network, but one that was also connected to the full ARPANET via the UCL computer. It meant, for example, that anyone on the ARPANET in the US could (with permission, as UCL added password protection to their node – the first on the ARPANET!) access the powerful IBM System 360/195 computer at the Rutherford Appleton Laboratory in Oxfordshire. ARPANET became a transatlantic network of connected networks. Any of those UK universities could also connect to any computer anywhere on the ARPANET. Their packets just went to the UCL computer and then to the US via the satellite link, before being forwarded onwards to other US computers. If these UK university computers had the programs for the file transfer or email services, for example, then they could seamlessly use them to access files anywhere else, or send messages to anyone else connected to the ARPANET, anywhere.

ARPANET ultimately turned into what we now call the Internet. No single person invented the Internet: it was a massive team effort with lots of people involved, each responsible for getting some part of it to work. Those like Adrian who played a critical part in making it work have been recognised as “Internet pioneers”: those who can justifiably claim they were part of the team that invented the Internet, and transformed all our lives as a result.

– Paul Curzon, Queen Mary University of London


Why is your Internet so slow?

Red and white lights of cars on a motorway at night
Image from Pixabay

The Internet is now so much a part of life that, unless you are over 50, it’s hard to remember what the world was like without it. Sometimes we enjoy really fast Internet access, yet at other times it’s frustratingly slow! So the question is why, and what does this have to do with posting a letter, or cars on a motorway? And how did electronic engineers turn the problem into a business opportunity?

The communication technology that powers the Internet is built of electronics. The building blocks are called routers, and these convert the light-streams of information that pass down the fibre-optic cables into streams of electrons, so that electronics can be used to switch and re-route the information inside the routers.

Enormously high capacities are achievable, which is necessary because the performance of your Internet connection really matters, especially if you enjoy online gaming or do a lot of video streaming. Anyone who plays online games will be familiar with the problem: opponents apparently popping out of nowhere, or stuttery character movement.

So the question is – why is communicating over a modern network like the Internet so prone to odd lapses of performance when traditional land-line telephone services were (and still are) so reliable? The answer is that traditional telephone networks send data as a constant stream of information, while over the Internet, data is transmitted as “packets”. Each packet is a large group of data bits stuck inside a sort of package, with a header attached giving the address of where the data is going. This is why it is like posting a letter: a packet is like a parcel of data sent via an electronic “postal service”.

But this still doesn’t really answer the question of why Internet performance can be so prone to slow down, sometimes seeming almost to stop completely. To see this we can use another analogy: the flow of packet data is also like the flow of cars on a motorway. When there is no congestion the cars flow freely and all reach their destination with little delay, so that good, consistent performance is enjoyed by the car’s users. But when there is overload and there are too many cars for the road’s capacity, then congestion results. Cars keep slowing down then speeding up, and journey times become horribly delayed and unpredictable. This is like having too many packets for the capacity in the network: congestion builds up, and bad delays – poor performance – are the result.

Typically, Internet performance is assessed using broadband speed tests, where lots of test data is sent out and received by the computer being tested and the average speed of sending data and of receiving it is measured. Unfortunately, speed tests don’t help anyone – not even an expert – understand what people will experience when using real applications like an online game.

Electronic engineering researchers at Queen Mary, University of London have been studying these congestion effects in networks for a long time, mainly by using probability theory, which was originally developed in attempts to analyse games of chance and gambling. In the past ten years, they have been evaluating the impact of congestion on actual applications (like web browsing, gaming and Skype) and expressing this in terms of real human experience (rather than speed, or other technical metrics). This research has been so successful that one of the Professors at Queen Mary, Jonathan Pitts, co-founded a spinout company called Actual Experience Ltd so the research could make a real difference to industry and so ultimately to everyday users.
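To get a flavour of the kind of probability theory involved, here is the textbook M/M/1 queue, the simplest probabilistic model of congestion at a single router (a sketch of the general idea, not the Queen Mary team's actual analysis). The mean time a packet spends at the router is 1/(μ − λ), where μ is the rate the router can serve packets and λ the rate they arrive, and it blows up as the load approaches capacity:

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean time a packet spends at a router in the textbook M/M/1 queue model."""
    assert arrival_rate < service_rate, "overloaded: the queue grows without bound"
    return 1.0 / (service_rate - arrival_rate)

service = 1000.0  # packets per second the router can forward
for load in (0.5, 0.9, 0.99):  # utilisation: fraction of capacity in use
    delay_ms = mm1_delay(load * service, service) * 1000
    print(f"{int(load * 100)}% load: {delay_ms:.0f} ms per packet")
```

Going from 50% to 99% load multiplies the delay fifty-fold: this non-linear blow-up is why a network can feel fine one minute and near-unusable the next.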

For businesses that rely heavily on IT, the human experience of corporate applications directly affects how efficiently staff can work. In the consumer Internet, human experience directly affects brand perception and customer loyalty. Actual Experience’s technology enables companies to manage their networks and servers from the perspective of human experience – it helps them fix the problems that their staff and customers notice, and invest their limited resources to get the greatest economic benefit.

So Internet gaming, posting letters, probability theory and cars stuck on motorways are all connected. But to make the connection you first need to study electronic engineering.

– Paul Curzon, Queen Mary University of London.

This article was originally published on the CS4FN website. It was also published in our 2023 Advent Calendar.


Claude Shannon: Inventing for the fun of it

Image by Paul Curzon

Claude Shannon, inventor of the rocket powered Frisbee, gasoline powered pogo stick, a calculator that worked using roman numerals, and discoverer of the fundamental equation of juggling! Oh yeah, and founder of the most important theory underpinning all digital communication: information theory.

Claude Shannon is perhaps one of the most important engineers of the 20th century, but he did it for fun. Though his work changed the world, he was always playing with and designing things, simply because it amused him. Like his contemporary Richard Feynman, he did it for ‘the pleasure of finding things out.’

As a boy, Claude liked to build model planes and radio-controlled boats. He once built a telegraph system to a friend’s house half a mile away, though he got in trouble for using the barbed wire around a nearby pasture. He earned pocket money delivering telegrams and repairing radios.

He went to the University of Michigan, and then worked on his Master’s at MIT. While there, he realised that the logic he learned in his maths classes could be applied to the electronic circuits he studied in engineering. This became his Master’s thesis, published in 1938. It was described as ‘one of the most important Master’s theses ever written… [it] helped to change digital circuit design from an art to a science.’
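Shannon's insight was that switch circuits obey Boolean algebra: switches wired in series only pass current if both are closed (logical AND), while switches in parallel pass current if either is closed (logical OR). A toy sketch of the idea (the example circuit is invented for illustration, not taken from his thesis):

```python
# Shannon's observation: switch circuits obey Boolean algebra.
# Series:   current flows only if BOTH switches are closed -> AND
# Parallel: current flows if EITHER switch is closed       -> OR

def series(a, b):
    return a and b

def parallel(a, b):
    return a or b

# An example circuit: switch x in series with (y in parallel with z)
def circuit(x, y, z):
    return series(x, parallel(y, z))

for x in (False, True):
    for y in (False, True):
        for z in (False, True):
            state = "current flows" if circuit(x, y, z) else "no current"
            print(x, y, z, "->", state)
```

Analysing a tangle of switches becomes a matter of simplifying a Boolean expression on paper: that is the shift "from an art to a science".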

Claude Shannon is known for his serious research, but a lot of his work was whimsical. He invented a calculator called THROBAC (Thrifty Roman numerical BACkward looking computer) that performed all its operations in the Roman numeral system. His home was full of mechanical turtles that would wander around, turning at obstacles; a gasoline-powered pogo stick and rocket-powered Frisbee; a machine that juggled three balls with two mechanical hands; a machine to solve the Rubik’s cube; and the ‘Ultimate Machine’, a box that, when turned on, would make an angry, annoyed sound, reach out a hand and turn itself off. As Claude once explained with a smile, ‘I’ve spent lots of time on totally useless things.’

A lot of early psychology experiments involved getting a mouse to run through a maze to reach some food at the end. By performing these experiments over and over in different ways, researchers could figure out how a mouse learns. So Claude built a mouse-shaped robot called Theseus. Theseus could search a maze until it solved it, and then use this knowledge to find its way through the maze from any starting point.

Oh, and there’s one other paper of his that needs mentioning. No, not the one on the science of juggling, or even the one describing his ‘mind reading’ machine. In 1948 he published ‘A Mathematical Theory of Communication’. Quite simply, this changed the world, and changed how we think about information. It laid the groundwork for much of the theory used in developing modern cryptography, satellite navigation, mobile phone networks… and the Internet.
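The core quantity in that 1948 paper is entropy: the average number of bits per symbol genuinely needed to encode a message, H = −Σ p·log₂(p), where p is the probability of each symbol. A small illustrative sketch, estimating the probabilities from letter counts:

```python
from collections import Counter
from math import log2

def entropy(message):
    """Shannon entropy in bits per character: H = -sum of p * log2(p)."""
    counts = Counter(message)
    total = len(message)
    # log2(total/n) == -log2(n/total), which keeps each term positive
    return sum((n / total) * log2(total / n) for n in counts.values())

print(entropy("ABAB"))                   # 1.0 bit per character: like a fair coin toss
print(entropy("AAAA"))                   # 0.0: entirely predictable, no information
print(round(entropy("ABRACADABRA"), 2))  # somewhere in between
```

A predictable message carries little information and so can be compressed heavily; an unpredictable one cannot. That single idea underpins everything from ZIP files to mobile phone coding.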

– Paul Curzon, Queen Mary University of London.



This quantum message will self-destruct in 10 seconds…

Mission Impossible always involved the team taking on apparently impossible missions, delivered by a message concluding with the famous line “This message will self-destruct in 10 seconds”. It was always followed by the message physically destroying itself in some dramatic way, such as flames or smoke coming from the tape recorder. Now it has been shown that apparently impossible destruction of messages is actually possible: you can send holographic messages that the sender can make disappear even after they have been sent. It relies on the apparently impossible, but real, properties of quantum physics.

A hologram is a 3-dimensional image formed using laser light. It records light scattered from objects coming from lots of different directions. This differs from photography, where the recorded light comes from one direction only. You can see examples on the back of bank cards (often a flying dove), where they are used as a hard-to-copy security device.

Now researchers at the University of Exeter have shown it is possible to make quantum holograms that exploit quantum effects. They are made from entangled photons: pairs of light particles that have been linked together in a way that means that, after the entangling, whatever happens to one immediately affects the other too… however far apart they are. Entanglement is one of those weird properties of quantum physics, the physics of the very, very small. It means that subatomic particles, once entangled, can later instantly affect each other even when separated by large distances.

This effect has now been put to novel use by Jensen Li and his team in their research at Exeter. They entangled streams of pairs of photons emitted from a crystal using lasers, then separated the pairs. One stream of photons from the pairs was used to create a holographic image on a special kind of material called a meta-material. Meta-materials are just materials engineered at very tiny scales so as to have properties not usually seen in nature. For example, they might be designed to carefully control light or radio waves by reflecting them very precisely in certain directions. One use of that might be to bounce light round from behind an object so that it appears invisible. Some butterfly wings and bird feathers (think peacocks and kingfishers) do a similar sort of thing, with very precise microscopic-scale surface structures that cause their startlingly bright, shimmering colours.

Exeter’s meta-material was flat, but with a special surface designed to have tiny features that manipulate light in very precise ways, creating a hologram based on the information encoded in the beam of laser light. In the first test showing that their quantum hologram system works, the hologram just showed the letters H, D, V and A. The light from this hologram continued on to a camera, so a picture of the hologram could be taken. So far so normal.

The cunning (and rather weird) thing, though, is due to what they did to the other stream of light. Each photon in this stream was entangled with a photon in the hologram light stream. Due to the quantum physics of entanglement, that meant that changes to these particles could affect those making the hologram. In particular, the Exeter team had this second stream pass through a polarising filter, essentially like the lens of polaroid sunglasses. Light vibrates in different directions, and a sunglasses lens cuts out the light vibrating in a given direction. Now, the letter H in the message was created from light polarised horizontally, unlike the other letters, which were polarised vertically. This meant that when the second stream of light was passed through a polarising filter blocking out the horizontally polarised light, it also affected the photons entangled with the blocked photons. The other stream of light, the one that created the hologram, was affected even though it went nowhere near the polarising filter. The result was that the horizontally polarised H could be made to disappear from the message caught on camera. It really did self-destruct, just in a quantum way.

If scaled up, such a system could be used to send messages that are still (instantly) controlled by the sender even after they have been sent, whether by disappearing or by being changed to say something else. The approach could also be incorporated into secure quantum computing communication systems, where the messages are also encrypted.

Fortunately, this blog is not a quantum blog, so will not self-destruct in 10 seconds … so please do share it with your friends!

Paul Curzon, Queen Mary University of London

More on …

Magazines …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.


Scilly cable antics

Sunset over the Scilly Isles with a sailing boat in the foreground
Image by Mike Palmer from Pixabay (CROPPED)

Undersea telecommunications cables let the world communicate and led to the world-spanning Internet. It was all started by the Victorians. Continents were connected, but so were closer islands, including the Scilly Isles.

Autumn 1869. There were great celebrations as the 31-mile-long telecommunications cable was finally hauled up the shore and into the hut. The Scilly Isles now had a direct cable communication link to the mainland. But would it work? Several test messages were sent and it was announced that all was fine. The journalists filed their story. The celebrations could begin.

Except it didn’t actually work! The cable wasn’t connected at all. The ship laying the cable had gone off course. Either that or someone’s maths had been shaky. The cable had actually run out five miles short of the islands. Not wanting to spoil the party, the captain ordered the line to be cut. Then, unknown to the crowd watching, the crew just dragged the cut-off end of the cable up the beach and pretended to do the tests. The Scilly Isles weren’t actually connected to Cornwall until the following year.

Paul Curzon, Queen Mary University of London (from the archive)


Gutta-Percha: how a tree launched a global telecom revolution


by Paul Curzon, Queen Mary University of London

(from the archive)

Rubber tree being tapped
Image from Pixabay

Obscure plants and animals can turn out to be surprisingly useful. The current mass extinction of animal and plant species needs to be stopped for lots of reasons, but an obvious one is that we risk losing forever materials that could transform our lives. Gutta-percha is a good example from the 19th century. It provided a new material with uses ranging from electrical engineering to bioengineering. It even transformed the game of golf. Perhaps its greatest claim to fame, though, is that it kick-started the worldwide telecoms boom of the 19th century that ultimately led to the creation of global networks, including the Internet.

Gutta-percha trees are native to South East Asia and Australia. Their sap is similar to rubber. It’s actually a natural polymer: a kind of material made of gigantic molecules built up from smaller structures repeated over and over again. Plastics, amber, silk, rubber and wool are all made of polymers. Though very similar to rubber, gutta-percha, unlike rubber, is biologically inert – it doesn’t react with biological materials – and that was the key to its usefulness. Western explorers discovered it in the middle of the 17th century, though local Malay people already knew about it and used it.

Chomping wires

So how did it play a part in creating the first global telecom network? Back in the 19th century, the telegraph was revolutionising the way people communicated. It meant messages could be sent across a country in minutes. The trouble was that when messages got to the coast, they ground to a halt. A message could only travel across an ocean as fast as a boat could carry it. Messages could whiz from one end of America to the other in minutes but would then take several weeks to make it to Europe. The solution was to lay undersea telegraph cables. However, to carry electricity an undersea cable needs to be protected, and no one had succeeded in doing that. Rubber had been tried as an insulating layer for the cables, but marine animals and plants just attacked it, and once a cable was open to the sea it became useless for sending signals. Gutta-percha, on the other hand, is a great insulator too, but it doesn’t degrade in seawater.

As it was the only known material that worked, soon all marine cables used gutta-percha, and as a result the British businessmen who controlled its supply became very rich. Soon telegraph cables were being laid everywhere – the original global telecoms network. To start with, the network carried telegraph signals; it was then upgraded to carry voice, and now it is based on fibre optics – the backbone of the Internet.

Rotting teeth

Gutta-percha has also been used by dentists – just as marine animals don’t attack it, it doesn’t degrade inside the human body either. That, together with it being easy to shape, makes it perfect for dental work. For example, it is used in root canal operations. The dentist removes the pulp and other tissue deep inside a rotting tooth, leaving an empty chamber, and gutta-percha turns out to be an ideal material to fill the space. Medical engineers and materials scientists are, though, trying to develop synthetic materials like gutta-percha but with even better properties for use in medicine and dentistry.

Dimpled balls

That just leaves golf! Early golf balls were filled with feathers. In 1848, Robert Adams Paterson came up with the idea of making them out of gutta-percha, since moulding it was much easier than the laborious process of sewing balls stuffed with feathers. It was quickly realised, if by accident, that after the balls had been used a few times they would fly further. It turned out this was due to the dimples made in the balls each time they were hit: the dimples improved the aerodynamics of the ball. That’s why modern golf balls are intentionally covered in dimples.

So gutta-percha has revolutionised global communications, changed the game of golf and even helped people with rotting teeth. Not bad for a tree.
