RADAR winning the Battle of Britain

Plaque commemorating the Birth of RADAR
Image Kintak, CC BY-SA 3.0 via Wikimedia Commons

The traditional story of how World War II was won is one of inspiring leaders, brilliant generals and plucky Brits with “Blitz Spirit”. In reality it is usually better technology that wins wars. Once that meant better weapons, but in World War II mathematicians and computer scientists were instrumental in winning the war by cracking the German codes using both maths and machines. It is easy to be a brilliant general when you know the other side’s plans in advance! Less celebrated but just as important, weathermen and electronic engineers were also instrumental in winning World War II, and especially the Battle of Britain, with the invention of RADAR. It is much easier to win an air battle when you know exactly where the opposition’s planes are. It was down largely to meteorologist and electronic engineer Robert Watson-Watt and his assistant Arnold Wilkins. Their story is told in the wonderful, but under-rated, film Castles in the Sky, starring Eddie Izzard.

****SPOILER ALERT****

In the 1930s, Nazi Germany looked like an ever-increasing threat as it ramped up its militarisation, building a vast army and air force. Britain was way behind in the size of its air force. Should Germany decide to bomb Britain into submission, it would be a totally one-sided battle. Something needed to be done.

A hopeful plan was hatched in the mid-1930s to build a death ray to zap pilots in attacking planes. One of the engineers asked to look into the idea was Robert Watson-Watt. He worked for the Met Office and was an expert in the practical use of radio waves. He had pioneered the idea of tracking thunderstorms using the radio emissions from lightning as a warning system for planes, developing the idea as early as 1915. This ultimately led to the invention of “Huff-Duff”, shorthand for High Frequency Direction Finding, where radio sources could be accurately tracked from the signals they emitted. That system helped Britain win the U-Boat war in the North Atlantic, as it allowed anti-submarine ships to detect and track U-Boats when they surfaced to use their radio. As a result, Huff-Duff helped sink a quarter of the U-Boats that were attacked. That in itself was vital for Britain to survive the siege that the U-Boats were enforcing by sinking convoys of supplies from the US.

However, by the 1930s Watson-Watt was working on other applications based on his understanding of radio. His assistant, Arnold Wilkins, quickly proved that the death ray idea would never work, but pointed out that planes seemed to affect radio waves. Together they instead came up with the idea of creating a radio detection system for planes. Many others had played with similar ideas, including German engineers, but no one had made a working system.

Because the French coast was only 20 minutes flying time away the only way to defend against German bombers would be to have planes patrolling the skies constantly. But that required vastly more planes than Britain could possibly build. If planes could be detected from sufficiently far away, then Spitfires could instead be scrambled to intercept them only when needed. That was the plan, but could it be made to work, when so little progress had been made by others?

Watson-Watt and Wilkins set to work making a prototype which they successfully demonstrated could detect a plane in the air (if only when it was close by). It was enough to get them money and a team to keep working on the idea. Watson-Watt followed a maxim of “Give them the third best to go on with; the second best comes too late, the best never comes”. With his radar system he did not come up with a perfect system, but with something that was good enough. His team just used off-the-shelf components rather than designing better ones specifically for the job. Also, once they got something that worked they put it into action. Unlike later, better systems, their original radar system didn’t involve sweeping radar signals that bounced off a plane when the sweep pointed at it, but a radio signal blasted in all directions. That meant it took lots of power. The position of the plane was determined by a direction-finding system Watson-Watt designed, based on where the radio signal bounced back from. However, it worked, and a network of antennas was set up in time for the Battle of Britain. Their radar system, codenamed Chain Home, could detect planes 100 miles away. That gave plenty of time to scramble planes. The real difficulty was actually getting the information to the airfields to scramble the pilots quickly. That was eventually solved with a better communication system.
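The core idea behind any radar range measurement is simple enough to sketch in a few lines of code: time how long the radio pulse takes to bounce back, and the distance follows from the speed of light. This is an illustrative sketch of the principle, not the actual Chain Home electronics, and the example numbers are made up for illustration.

```python
# Radar ranging in miniature: a radio pulse travels out to the plane and back
# at the speed of light, so the one-way range is half the round trip.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def range_from_echo(echo_time_seconds):
    """Distance to the target given the round-trip echo time."""
    return SPEED_OF_LIGHT * echo_time_seconds / 2

# A plane 100 miles (about 161 km) away returns an echo in roughly a millisecond:
echo = 2 * 160_934 / SPEED_OF_LIGHT   # round-trip time for 100 miles
print(f"echo after {echo * 1000:.3f} ms")
print(f"range: {range_from_echo(echo) / 1609.34:.1f} miles")
```

The striking thing the numbers show is why radar works at all as an early-warning system: even at 100 miles the echo arrives in about a thousandth of a second, so detection is effectively instant compared to the minutes it takes a bomber to cover the same distance.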

The Germans were aware of all the antennas appearing along the British coast but decided it must be a communications system. Carrots also helped fool them! You may have heard that carrots help you see in the dark. That was just war-time propaganda invented to explain away the ability of the Brits to detect bombers so soon: a story was circulated that, due to rationing, Brits were eating lots of carrots and so had incredible eyesight as a result!

The Spitfires and their fighter pilots got all the glory and fame, but without radar they would not even have been off the ground before the bombers had dropped their payloads. Robert Watson-Watt and Arnold Wilkins, with their practical electronic engineering, were the real unsung heroes of the Battle of Britain.

Paul Curzon, Queen Mary University of London

Postscript

In the 1950s Watson-Watt was caught speeding by a radar speed trap. He wrote a poem about it:

A Rough Justice

by Sir Robert Watson-Watt

Pity Sir Watson-Watt,
strange target of this radar plot

And thus, with others I can mention,
the victim of his own invention.

His magical all-seeing eye
enabled cloud-bound planes to fly

but now by some ironic twist
it spots the speeding motorist

and bites, no doubt with legal wit,
the hand that once created it.

More on…

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Robert Weitbrecht and his telecommunication device for the deaf

Robert Weitbrecht was born deaf. He went on to become an award winning electronics scientist who invented the acoustic coupler (or modem) and a teletypewriter (or teleprinter) system allowing the deaf to communicate via a normal phone call.

A modem telephone: the telephone slots into a teletypewriter here with screen rather than printer.
A telephone modem: Image by Juan Russo from Pixabay

If you grew up in the UK in the 1970s with any interest in football, then you may think of teleprinters fondly. It was the way that you found out about the football results at the final whistle, watching for your team’s result on the final score TV programme. Reporters at football grounds across the country typed in the results, which then appeared to the nation one at a time as a teleprinter slowly typed them at the bottom of the screen.

Teleprinters were a natural, if gradual, development from the telegraph and Morse code. Over time a different, simpler binary-based code was developed. Then, by attaching a keyboard and creating a device to convert key presses into the binary code to be sent down the wire, you could type messages instead of tapping out a code. Anyone could now do it, so typists replaced Morse code specialists. The teleprinter was born. In parallel, of course, the telephone was invented, allowing people to talk to each other by converting the sound of someone speaking into an electrical signal that was then converted back into sound at the other end. Then you didn’t even need to type, never mind tap, to communicate over long distances. Telephone lines took over. However, typed messages still had their uses, as the football results example showed.

Another advantage of the teletypewriter/teleprinter approach over the phone, was that it could be used by deaf people. However, teleprinters originally worked over separate networks, as the phone network was built to take analogue voice data and the companies controlling them across the world generally didn’t allow others to mess with their hardware. You couldn’t replace the phone handsets with your own device that just created electrical pulses to send directly over the phone line. Phone lines were for talking over via one of their phone company’s handsets. However, phone lines were universal so if you were deaf you really needed to be able to communicate over the phone not use some special network that no one else had. But how could that work, at a time when you couldn’t replace the phone handset with a different device?

Robert Weitbrecht solved the problem after being prompted to do so by deaf orthodontist James Marsters. He created an acoustic coupler – a device that converted between sound and electrical signals – that could be used with a normal phone. It suppressed echoes, which improved the sound quality. Using old, discarded teletypewriters he created a usable system. Slot the phone mouthpiece and earpiece into the device and the machine “talked” over the phone in an R2D2-like language of beeps to other machines like it. It turned the electrical signals from a teletypewriter into beeps that could be sent down a phone line via its mouthpiece. It also decoded beeps received via the phone earpiece into the electrical form needed by the teleprinter. You typed at one end, and what you typed came out on the teleprinter at the other (and vice versa). Deaf and hard of hearing people could now communicate with each other over a normal phone line and normal phones! The idea of a Telecommunications Device for the Deaf that worked with normal phones was born. However, such devices were still not strictly legal in the US, so James Marsters and others lobbied Washington to allow them.
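The “language of beeps” works by giving each bit its own audible tone, an idea known as frequency-shift keying. Here is a toy sketch of that encode/decode round trip; the two frequencies are assumptions chosen for illustration, not a claim about Weitbrecht’s exact design.

```python
# Toy frequency-shift keying: each bit becomes one of two tones sent down the
# phone line as sound, and the receiving coupler maps tones back to bits.
MARK_HZ = 1400    # tone standing for a 1 bit (illustrative value)
SPACE_HZ = 1800   # tone standing for a 0 bit (illustrative value)

def bits_to_tones(bits):
    """Encode: turn teletypewriter bits into beep frequencies."""
    return [MARK_HZ if b else SPACE_HZ for b in bits]

def tones_to_bits(tones):
    """Decode: turn received beep frequencies back into bits."""
    return [1 if t == MARK_HZ else 0 for t in tones]

message = [1, 0, 1, 1, 0]
assert tones_to_bits(bits_to_tones(message)) == message
print(bits_to_tones(message))
```

The point of using tones is exactly the one in the story: sound is the only thing a phone handset is built to carry, so encoding bits as beeps let teletypewriters piggyback on the ordinary voice network without touching the phone company’s hardware.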

The idea (and legalisation) of acoustic couplers, however, then inspired others to develop similar modems for other purposes and in particular to allow computers to communicate via the telephone network using dial-up modems. You no longer needed special physical networks for computers to link to each other, they could just talk over the phone! Dial-up bulletin boards were an early application where you could dial up a computer and leave messages that others could dial up to read there via their computers…and from that idea ultimately emerged the idea of chat rooms, social networks and the myriad other ways we now do group communication by typing.

The first ever (long distance) phone call between two deaf people (Robert Weitbrecht and James Marsters) using a teletypewriter / teleprinter was:

“Are you printing now? Let’s quit for now and gloat over the success.”

“Yes, let’s.”

– Paul Curzon, Queen Mary University of London


Super-plant supercapacitors

Aloe vera plant
Image by Marco from Pixabay

There are a whole range of plants that have been called superfoods for their amazing claimed health benefits because of the nutrients they contain. But plants can have other super powers too. For example, some are better at absorbing carbon dioxide to help with climate change, others provide medicines, or can strip pollutants out of the air or soil. But one, aloe vera, is a super-plant in a new way. It can now store electricity that could be used to power portable devices – by plugging them into the plant.

Capacitors are one of the basic electronic components, like resistors and transistors, that electronic circuits are built from. They act a bit like a tiny battery, building up charge on a pair of surfaces with an insulator between so that charge cannot move directly from one to the other. Electrons build up on one plate, storing energy. When the capacitor is discharged that energy is released. They have a variety of uses including evening out power supplies. A supercapacitor is just a capacitor that can store a lot more energy so is a little like a tiny rechargeable battery, though releases the energy faster and can be charged and discharged many more times.
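How much more energy does a supercapacitor hold? The standard formula for the energy stored on a charged capacitor is E = ½CV², so a quick calculation makes the gap concrete. The component values below are typical, illustrative figures, not measurements from the aloe vera device.

```python
def capacitor_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored on a charged capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# An ordinary circuit-board capacitor: 100 microfarads charged to 5 volts...
small = capacitor_energy_joules(100e-6, 5)    # 0.00125 joules
# ...versus a supercapacitor: 100 farads charged to 2.7 volts.
supercap = capacitor_energy_joules(100, 2.7)  # 364.5 joules
print(f"the supercapacitor stores {supercap / small:,.0f} times more energy")
```

The million-fold jump in capacitance is what earns the “super” prefix: the stored energy scales directly with capacitance, so even at a lower voltage the supercapacitor holds hundreds of thousands of times more energy than the ordinary part.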

Various teams around the world have explored the use of aloe vera in supercapacitors. A team of researchers, led by Yang Zhao from Beijing Institute of Technology, has succeeded in creating a supercapacitor made completely from materials extracted from the plant (apart from one gold wire). The parts were made by heating a part of the leaf of the plant, and by freezing its juice. The advantage of this is that the supercapacitor is biodegradable, unlike traditional ones made from oil-based synthetic materials. It also makes them biocompatible, in that they can be inserted into aloe vera and similar plants without doing them harm, and potentially make use of electricity generated by the plant. Her team has inserted these tiny supercapacitors inside plants, including cacti and aloe vera, to show this idea works in principle.

So plants can be superheroes and aloe vera more than most: it looks nice on your windowsill, you can make soap from it, it supposedly has medicinal value, it is being used in research to remove pollutants from the air and soon it could provide you with electricity too. So next time you are lost in a cactus-filled wilderness make sure you have aloe vera capacitors with you so you can charge your gadgets while waiting to be rescued.


An Wang’s magnetic memory

A golden metal torus
Image by Hans Etholen from Pixabay

An Wang was one of the great pioneers of the early days of computing. Just as the invention of the transistor led to massive advances in circuit design and ultimately computer chips, Wang’s invention of magnetic core memory provided the parallel advance needed in memory technology.

Born in Shanghai, An went to university at Harvard in the US, studying for a PhD in electrical engineering. On completing his PhD he applied for a research job there and was set the task of designing a new, better form of memory to be used with computers. It was generally believed that the way forward was to use magnetism to store bits, but no one had worked out a way to do it. It was possible to store data by, for example, magnetising rings of metal. This could be done by running wires through the rings. Passing the current in one direction set a 1, and in the other a 0, based on the direction of the magnetic field created.

If all you needed was to write data it could be done. However, computers needed to be able to repeatedly read memory too, accessing and using the data stored, possibly many times. And the trouble was, all the ways that had been thought up to use magnets were such that as soon as you tried to read the information stored in the memory, that data was destroyed in the process of reading it. You could only read stored data once and then it was gone!

An was stumped by the problem just like everyone else. Then, while walking and pondering the problem, he suddenly had a solution. Thinking laterally, he realised it did not matter if the data was destroyed at all. You had just read it, so you knew what it was when you destroyed it. You could therefore write it straight back again, immediately. No harm done!
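Wang’s insight is easy to capture in code: wrap the destructive read inside a read operation that immediately writes the value back, so callers never notice the destruction. This is a simplified software model of the idea, not the real core-driver electronics.

```python
# A model of magnetic-core memory's destructive read, plus Wang's fix.
class Core:
    """One magnetic core storing a single bit."""
    def __init__(self, bit=0):
        self.bit = bit

    def destructive_read(self):
        """Sensing the core forces it to a known state: the data is wiped."""
        value = self.bit
        self.bit = 0
        return value

def read(core):
    """Wang's trick: read the (now destroyed) value, then write it straight back."""
    value = core.destructive_read()
    core.bit = value
    return value

c = Core(1)
print(read(c))  # 1
print(read(c))  # still 1: the write-back restored the bit
```

The design lesson is the lateral-thinking one from the story: rather than fighting the physics to invent a non-destructive read, accept the destruction and hide it behind an extra write.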

Magnetic-core memory was born and dominated all computer memory for the next two decades, helping drive the computer revolution into the 1970s. An took out a patent for his idea. It was drafted to be very wide, covering any kind of magnetic memory. That meant that even though others improved on his design, he owned the idea of all the magnetic-based memory that followed, as it all used his basic idea.

On leaving Harvard he set up his own computer company, Wang Laboratories. It was initially a struggle to make it a success. However, he sold the core-memory patent to IBM and used the money to give his company the boost that it needed to become a success. As a result he became a billionaire, the 5th richest person in the US at one point.

Paul Curzon, Queen Mary University of London


Herman Hollerith: from punch cards to a special company

Herman Hollerith
Herman Hollerith (Image from wikimedia, Public Domain)

Herman Hollerith, the son of immigrants, struggled early on at school and then later in bookkeeping at college, but it didn’t stop him inventing machines that used punch cards to store data. He founded a company to make and sell his machines. It turned into the company now called IBM, which of course helped propel us into the computer age.

Hollerith had worked as a census clerk for a while, and the experience led to his innovation. The United States has been running a national census every 10 years since the American Revolution, aiming to record the details of every person, for tax and national planning purposes. It is not just a count but has recorded information about each person such as male/female, married or not, ethnicity, whether they can read, disabilities, and so on.

As the population expanded it of course became harder to do. It was also made harder as more data about each person was being collected over time. For the 1890 census a competition was held to try and find better ways to compile the data collected. Herman Hollerith won it with his punch card based machine. It could process data up to twice as fast as his competitors and with his system data could be prepared 10 times faster.

To use the machine, the census information for each person was recorded by punching holes in special cards at specific positions. It was a binary system with a hole essentially meaning the specific feature was present (eg they were married) and no hole meaning it wasn’t (eg they were single). Holes against numbers could also mean one of several options.

Hollerith punched card from wikimedia
Hollerith punched card (Image from wikimedia, Public Domain)

The machine could read the holes because they allowed a wire to make an electrical connection to a pool of mercury below, so the holes just acted as switches. Data could therefore be counted automatically, with each hole adding one to a different counter. It was the first time that a system of machine-readable data had been used and of course binary went on to be the way all computers store information. In processing the census his machines counted the data on around 100 million cards (an early example of Big Data processing!). This contributed to reducing the time it took to compile the data from the whole country by two years. It also saved about $5 million.
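The tabulation idea itself, each hole closing a switch that bumps a counter, can be sketched as a few lines of code. The field layout below (which hole position means what) is a made-up example, not Hollerith’s actual card format.

```python
# A toy model of Hollerith-style tabulation: each card is the set of its
# punched hole positions; every hole adds one to the matching counter.
from collections import Counter

# Hypothetical layout: which hole position records which fact about a person.
FIELDS = {0: "male", 1: "female", 2: "married", 3: "can read"}

def tabulate(cards):
    counters = Counter()
    for holes in cards:           # one card per person
        for position in holes:    # each hole acts as a switch...
            counters[FIELDS[position]] += 1  # ...that advances a counter
    return counters

cards = [{0, 2, 3}, {1, 3}, {0, 3}]   # three people's census cards
print(tabulate(cards))
```

Running counters over cards like this is exactly why the machine scaled: adding another million people means feeding in another million cards, not redesigning the process.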

Hollerith patented the machine and was also awarded a PhD for his work on it. He set up a company to sell it called the Tabulating Machine Company. Over time it merged with other companies until eventually, in 1924, the resulting company changed its name to International Business Machines or, as it is now known, IBM. It is of course one of the most important companies driving the computer age, building early mainframe computers the size of rooms that revolutionised business computing, but later also responsible for the personal computer, leading to the idea that everyone could own a computer.

Not a bad entrepreneurship legacy for someone who early on at school apparently struggled with, and certainly hated, spelling – he jumped out of a window at school to avoid doing it. He also did badly at bookkeeping in college. He was undeterred by what he was poor at, though, and focussed on what he was good at. He was hard working and developed his idea for a mechanical tabulating machine for 8 years before his first machine went to work. Patience and determination were certainly strengths that paid off for him!


Mike Lynch: sequencing success

Mike Lynch was one of Britain’s most successful entrepreneurs. An electrical engineer, he built his businesses around machine learning long before it was a buzz phrase. He also drew heavily on a branch of maths called Bayesian statistics which is concerned with understanding how likely, even apparently unlikely, things are to actually happen. This was so central to his success that he named his super yacht, Bayesian, after it. Tragically, he died on the yacht, when Bayesian sank in a freak, extremely unlikely, accident. The gods of the sea are cruel.

Synthesisers

A keyboard synthesiser
Image by Julius H. from Pixabay

Mike started his path to becoming an entrepreneur at school. He was interested in music, and especially the then new and increasingly exciting digital synthesisers that were being used by pop bands and were in the middle of revolutionising music. He couldn’t afford one of his own, though, as they cost thousands. He was sure he could design and build one to sell more cheaply. So he set about doing it.

He continued working on his synthesiser project as a hobby at Cambridge University, where he originally studied science, but changed to his by-then passion of electrical engineering. A risk of visiting his room was that you might painfully step on a resistor or capacitor, as they got everywhere. That was not surprising given that his living room was also his workshop. By this point he was also working more specifically on the idea of setting up a company to sell his synthesiser designs. He eventually got his first break in the business world when chatting to someone in a pub who was in the music industry. They were inspired enough to give him the few thousand pounds he needed to finance his first startup company, Lynett Systems.

By now he was doing a PhD in electrical engineering, funded by EPSRC, and went on to become a research fellow building both his research and innovation skills. His focus was on signal processing which was a natural research area given his work on synthesisers. They are essentially just computers that generate sounds. They create digital signals representing sounds and allow you to manipulate them to create new sounds. It is all just signal processing where the signals ultimately represent music.

A curving roof made of triangles of glass.
Image by Kang-Rui LENG from Pixabay

However, Mike’s research and ideas were more general than just being applicable to audio. Ultimately, Mike moved away from music, and focussed on using his signal processing skills, and ideas around pattern matching to process images. Images are signals too (resulting from light rather than sound). Making a machine understand what is actually in a picture (really just lots of patches of coloured light) is a signal processing problem. To work out what an image shows, you need to turn those coloured blobs into lines, then into shapes, then into objects that you can identify. Our brains do this seamlessly so it seems easy to us, but actually it is a very hard problem, one that evolution has just found good solutions to. This is what happens whether the image is that captured by the camera of a robot “eye” trying to understand the world or a machine trying to work out what a medical scan shows. 

This is where the need for maths comes in, to work out probabilities: how likely different things are. Part of the task of recognising lines, shapes and objects is working out how likely one possibility is over another. How likely is it that that band of light is a line, how likely is it that that line is part of this shape rather than that, and so on. Bayesian statistics gives a way to compute probabilities based on the information you already know (or suspect). When the likelihood of events is seen through this lens, things that seem highly unlikely can turn out to be highly probable (or vice versa), so it can give much more accurate predictions than traditional statistics. Mike’s PhD used this way of calculating probabilities even though some statisticians disdained it. Because of that it was shunned by some in the machine learning community too, but Mike embraced it and made it central to all his work, which gave his programs an edge.
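Bayes’ theorem, the engine behind this way of updating beliefs, fits in a one-line function. The scenario and numbers below are invented purely to illustrate the “unlikely can turn out likely (or vice versa)” point; they are not from Lynch’s work.

```python
# Bayes' theorem in miniature: update the probability that a line is really
# there, given that an imperfect detector has fired.
def posterior(prior, true_positive, false_positive):
    """P(line | detection), from P(line), P(detection | line)
    and P(detection | no line)."""
    evidence = true_positive * prior + false_positive * (1 - prior)
    return true_positive * prior / evidence

# A rare feature (1% prior) with a decent but imperfect detector
# (90% hit rate, 10% false alarm rate):
p = posterior(prior=0.01, true_positive=0.9, false_positive=0.1)
print(f"{p:.2f}")  # about 0.08: most detections of a rare feature are false alarms
```

This is the counter-intuitive punch of Bayesian reasoning: even a 90%-accurate detector is wrong most of the time about rare events, because the prior matters as much as the evidence.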

While Lynett Systems didn’t itself make him a billionaire, the experience from setting up that first company became a launch pad for other innovations based on similar technology and ideas. It gave him the initial experience and skills, but also meant he had started to build the networks with potential investors. He did what great entrepreneurs do and didn’t rest on his laurels with just one idea and one company, but started to work on new ideas, and new companies arising from his PhD research.

Fingerprints

Fingerprint being scanned
Image by alhilgo from Pixabay

He realised one important market for image pattern recognition, that was ripe for dominating, was fingerprint recognition. He therefore set about writing software that could match fingerprints far faster and more accurately than anyone else. His new company, Cambridge Neurodynamics, filled a gap, with his software being used by police forces nationwide. That then led to other spin-offs using similar technology.

He was turning the computational thinking skills of abstraction and generalisation into a way to make money. By creating core general technology that solved the very general problems of signal processing and pattern matching, he could then relatively easily adapt and reuse it to apply to apparently different novel problems, and so markets, with one product leading to the next. By applying his image recognition solution to characters, for example, he created software (and a new company) that searched documents based on character recognition. That led on to a company searching databases, and finally to the company that made him famous, Autonomy.

Fetch

A puppy fetching a stick
Image from Pixabay

One of his great loves was his dog, Toby, a friendly enthusiastic beast. Mike’s take on the idea of a search engine was fronted by Toby – in an early version, with his sights set on the nascent search engine market, his search engine user interface involved a lovable, cartoon dog who enthusiastically fetched the information you needed. However, in business finding your market and getting the right business model is everything. Rather than competing with the big US search engine companies that were emerging, he switched to focussing on in-house business applications. He realised businesses were becoming overwhelmed with the amount of information they held on their servers, whether in documents or emails, phone calls or videos. Filing cabinets were becoming history and being replaced by an anarchic mess of files holding different media, individually organised, if at all, and containing “unstructured data”. This kind of data contrasts with the then dominant idea that important data should be organised and stored in a database to make processing it easier. Mike realised that there was lots of data held by companies that mattered to them, but that just was not structured like that and never would be. There was a niche market there to provide a novel solution to a newly emerging business problem. Focussing on that, his search company, Autonomy, took off, gaining corporate giants as clients including the BBC. As a hands-on CEO, with both the technical skills to write the code himself and the business skills to turn it into products businesses needed, he ensured the company quickly grew. It was ultimately sold for $11 billion. (The sale later led to an accusation of fraud in the US, but he was acquitted of all the charges.)

Investing

From firsthand experience he knew that to turn an idea into reality you needed angel investors: people willing to take a chance on your ideas. With the money he made, he therefore started investing himself, pouring the money he was making from his companies into other people’s ideas. To be a successful investor you need to invest in companies likely to succeed while avoiding ones that will fail. This is also about understanding the likelihood of different things,  obviously something he was good at. When he ultimately sold Autonomy, he used the money to create his own investment company, Invoke Capital. Through it he invested in a variety of tech startups across a wide range of areas, from cyber security, crime and law applications to medical and biomedical technologies, using his own technical skills and deep scientific knowledge to help make the right decisions. As a result, he contributed to the thriving Silicon Fen community of UK startup entrepreneurs, who were and continue to do exciting things in and around Cambridge, turning research and innovation into successful, innovative companies. He did this not only through his own ideas but by supporting the ideas of others.

Man on rock staring at the sun between 2 parallel worlds
Image by Patricio González from Pixabay

Mike was successful because he combined business skills with a wide variety of technical skills including maths, electronic engineering and computer science, even bioengineering. He didn’t use his success to just build up a fortune but reinvested it in new ideas, new companies and new people. He has left a wonderful legacy as a result, all the more so if others follow his lead and invest their success in the success of others too.

In memory of a friend

– Paul Curzon, Queen Mary University of London


From a handful of sand to a fistful of dollars

Where computer chips come from

Sitting at the heart of your computer, mobile phone, smart TV (or even smart toaster) is the microprocessor that makes it all work. These electronic ‘chips’ have millions of tiny electronic circuits on them allowing the calculations needed to make your gizmos work. But it may be surprising to learn that these silicon chips, now a billion pound industry worldwide, are in fact mostly made of the same stuff that you find on beaches, namely sand.

A transistor is just like a garden hose with your foot on it

Sand is mostly made of silicon dioxide, and silicon, the second most abundant substance in the earth’s crust, has useful chemical properties as well as being very cheap. You can easily ‘add’ other chemicals to silicon and change its electrical properties, and it’s by using these different forms of silicon that you can make mini switches, or transistors, in silicon chips.

House Hose

A transistor on a chip can be thought of like a garden hose, water flows from the tap (the source) through the hose and out onto the garden (the drain), but if you were to stand on the hose with your foot and block the water flow the watering would stop. An electronic transistor on a chip in its most basic form works like this, but electrical charge rather than water runs through the transistor (in fact the two parts of a transistor are actually called the source and drain). The ‘gate’ plays the part of your foot; this is the third part of the transistor. Applying a voltage to the gate is like putting your foot on and off the hose, it controls whether charge flows through the transistor.

Lots of letter T’s

A billion pound industry made of sand

If you look at a transistor on a chip it looks like a tiny letter T: the top crossbar on the T is the source/drain part (the hose) and the upright part of the T is the gate (the foot part). Using these devices you can start to build up logic functions. For example, if you connect the source and drain of two transistors together one after another, they can work out the logic AND function. How? Well, think of this as one long hose with your foot and a friend’s foot available. If you stand on the hose, no water will flow. If your friend stands on the hose, no water will flow. If you both stand on the hose, definitely no water will flow. It is only when you don’t stand on the hose AND your friend also doesn’t stand on the hose that the water flows. So you’ve built a simple logic function.
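The foot-on-the-hose story above can be written down as a tiny Python sketch. This is an illustrative toy model of the analogy, not real transistor electronics (a real n-type transistor conducts when a gate voltage is applied, the opposite way round from the foot):

```python
def hose_flow(foot_on: bool) -> bool:
    """Water flows through a single hose only if no foot is pressing on it."""
    return not foot_on

def two_hoses_in_series(your_foot: bool, friends_foot: bool) -> bool:
    """Two transistors connected source-to-drain, like one long hose.
    Water flows only if NOT your_foot AND NOT friends_foot."""
    return hose_flow(your_foot) and hose_flow(friends_foot)

# Print the truth table: water only flows in the last row,
# when neither of you is standing on the hose.
for you in (True, False):
    for friend in (True, False):
        print(you, friend, "->", two_hoses_in_series(you, friend))
```

Chaining the two `hose_flow` checks with `and` is exactly what wiring the two transistors in series does physically: both must allow flow for any water (or charge) to get through.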

Printing chips

From such simple logic functions you can build very complex computers, if you have enough of them, and that’s again where silicon comes in. You can ‘draw’ with silicon down to very small sizes. In fact a silicon chip is printed with many different layers. For example, one layer has the patterns for all the sources and drains, the next layer chemically printed on top are the gates, the next the metallic connections between the transistors and so on. These chips take millions of pounds to design and test, but once the patterns are correct it’s easy to stamp out millions of chips. It’s just a big chemical printing press. It’s the fact that you can produce silicon chips efficiently and cheaply with more and more transistors on them each year that drives the technology leaps we see today.

Beautiful silicon

Finally you might wonder how the chip companies protect their chip designs? They in fact protect them by registering the design of the masks they use in the layer printing process. Design registration is normally used to protect works of artistic merit, like company logos. Whether chip masks are quite as artistic doesn’t seem to matter. What does matter is that the chemical printing of silicon and lots of computer scientists have made all today’s computer technology possible. Now there is a beautiful thought to ponder when next on the beach.

– Paul Curzon, Queen Mary University of London

This article was first published on the original CS4FN website.


You probably won’t be surprised to learn that computer science can now also help improve the creation of computer chips. Computational lithography (literally ‘stone writing’) improves the resolution needed to etch the design of these tiny components onto the wafer-thin silicon, using ultraviolet light (photolithography = ‘stone writing with light’). Here’s a promotional video from ASML about computational lithography.



Mixing Research with Entrepreneurship: Find a need and solve it

A mixing desk
Image by Ida from Pixabay

Becoming a successful entrepreneur often starts with seeing a need: a problem someone has that needs to be fixed. For David Ronan, the need was for anyone to be able to mix and master music; the problem was how hard that is to do. Now his company RoEx is fixing that problem by combining signal processing and artificial intelligence tools applied to music. It is based on research he originally did as a PhD student.

Musicians want to make music, though by “make music” they likely mean playing or composing music. The task of fiddling with buttons, sliders and dials on a mixing desk to balance the different tracks of music may not be a musician’s idea of what making music is really about, even though it is “making music” to a sound engineer or producer. However, mixing is now an important part of the modern process of creating professional standard music.

This is in part a result of the multitrack recording revolution of the 1960s. Multitrack involves recording different parts of the music as different tracks, then combining them later, adding effects, combining them some more … George Martin with the Beatles pioneered its use for mainstream pop music in the 1960s, and the Beach Boys created their unique “Pet Sounds” through this kind of multitrack recording too. Now, it is totally standard. Originally, though, recording music involved running a recording machine while a band, orchestra and/or singers did their thing together. If it wasn’t good enough they would do it all again from the beginning (and again, and again…). This is similar to the way that actors will act the same scene over and over dozens of times until the director is happy. Once they were happy with a take (or recording), that was basically it and they moved on to the next song.

With the advent of multitracking, each musician could instead play or sing their part on their own. They didn’t have to record at the same time, or even be in the same place, as the separate parts could be mixed together later. It then became the job of the engineers and producer to combine them into a single whole. Part of this is to adjust the levels of each track so they are balanced: you want to hear the vocals, for example, and not have them drowned out by the drums. At this point the engineer can also fix mistakes, cutting in a rerecording of one small part to replace something that wasn’t played quite right. Different special effects can also be applied to different tracks (playing one track at a different speed or even backwards, with reverb, or auto-tuned, for example). You can also take one singer and let them sing with multiple versions of themselves, so that they are their own backing group, singing layered harmonies with themselves. One person can even play all the separate instruments, as Prince often did on his recordings. The engineers and producer also put it all together to create the final sound, making the final master recording. Some musicians, like Madonna, Ariana Grande and Taylor Swift, do take part in the production and engineering of their records, or even take over completely, so they have total control of their sound. It takes experience, though, and why shouldn’t everyone have that amount of creative control?
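Underneath, level balancing is just arithmetic: each track is a sequence of audio samples, and a mix is a weighted sum of them. Here is a toy Python sketch (the sample values and gain settings are made up purely for illustration; real mixing works on full-rate audio and adds effects on top):

```python
def mix(tracks, gains):
    """Combine tracks sample-by-sample, scaling each track by its gain."""
    assert len(tracks) == len(gains)
    length = max(len(t) for t in tracks)
    mixed = []
    for i in range(length):
        # Sum the i-th sample of every track, scaled by that track's gain.
        mixed.append(sum(g * t[i] for t, g in zip(tracks, gains) if i < len(t)))
    return mixed

vocals = [0.5, 0.6, 0.4]    # made-up sample values
drums  = [0.9, -0.8, 0.7]
# Turn the drums down (gain 0.3) so they don't drown out the vocals.
print(mix([vocals, drums], [1.0, 0.3]))
```

Choosing those gain values well, for a whole song with many tracks, is the laborious, skilled part that tools like RoEx’s aim to automate.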

Doing all the mixing, correction and overdubbing can be laborious and takes a lot of skill, though. It can be very creative in itself too, which is why producers are often as famous as the artists they produce (think Quincy Jones or Nile Rodgers, for example). However, not everyone wanting to make their own music is interested in spending their time doing laborious mixing, and if you don’t yet have the skill yourself and can’t afford to pay a producer, what do you do?

That was the need that David spotted. He wanted to do for music what Instagram filters did for images, and make it easy for anyone to make and publish their own professional standard music. Based in part on his PhD research, he developed tools that could do the mixing, leaving a musician to focus on experimenting with the sound itself.

David had spent several years leading the research team of an earlier startup he helped found called AI Music. It worked on adaptive music: music that changes based on what is happening around it, whether in the world or in a video game being played. It was later bought by Apple. This was the highlight of his career to that point and it helped cement his desire to continue to be an innovator and entrepreneur. 

With the help of Queen Mary, where he did his PhD, he therefore decided to set up his new company RoEx. It provides an AI-driven mixing and mastering service. You choose basic mixing options and can experiment with different results, so you still have creative control. However, you no longer need expensive equipment, nor the skills to use it, and the process becomes far faster too. Mixing your music becomes much more about experimenting with the sound, the machine having taken over the laborious parts: working out the optimum way to mix the different tracks and producing a professional quality master recording at the end.

David didn’t just see a need and have an idea of how to solve it, he turned it into something that people want to use by not only developing the technology, but also making sure he really understood the need. He worked with musicians and producers through a long research and development process to ensure his product really works for any musician.

– Paul Curzon, Queen Mary University of London


Why is your Internet so slow?

Red and white lights of cars on a motorway at night
Image from Pixabay

The Internet is now so much a part of life that, unless you are over 50, it’s hard to remember what the world was like without it. Sometimes we enjoy really fast Internet access, and yet at other times it’s frustratingly slow! So why is that, and what does it have to do with posting a letter, or cars on a motorway? And how did electronic engineers turn the problem into a business opportunity?

The communication technology that powers the Internet is built of electronics. The building blocks are called routers, and these convert the light-streams of information that pass down the fibre-optic cables into streams of electrons, so that electronics can be used to switch and re-route the information inside the routers.

Enormously high capacities are achievable, which is necessary because the performance of your Internet connection is really important, especially if you enjoy online gaming or do a lot of video streaming. Anyone who plays online games will be familiar with the problem: opponents apparently popping out of nowhere, or stuttery character movement.

So the question is – why is communicating over a modern network like the Internet so prone to odd lapses of performance when traditional land-line telephone services were (and still are) so reliable? The answer is that traditional telephone networks send data as a constant stream of information, while over the Internet, data is transmitted as “packets”. Each packet is a large group of data bits stuck inside a sort of package, with a header attached giving the address of where the data is going. This is why it is like posting a letter: a packet is like a parcel of data sent via an electronic “postal service”.
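The parcel-of-data idea can be sketched in a few lines of Python. This is a made-up illustration, not any real protocol format: each packet here carries a destination address, a sequence number (so the receiver can put packets back in order) and a chunk of the message.

```python
def make_packets(message: bytes, dest: str, size: int = 4):
    """Split a message into packets, each with an address header."""
    packets = []
    for seq, start in enumerate(range(0, len(message), size)):
        packets.append({
            "dest": dest,                       # where the parcel is going
            "seq": seq,                         # position in the message
            "data": message[start:start + size] # the chunk being carried
        })
    return packets

def reassemble(packets) -> bytes:
    """Packets may arrive in any order; the sequence numbers restore it."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = make_packets(b"hello, internet!", "192.0.2.7")
# Even if the packets arrive in reverse order, the message survives.
print(reassemble(list(reversed(pkts))))
```

The point of the headers is exactly this independence: every packet can find its own way through the network, and the message is still recoverable at the far end.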

But this still doesn’t really answer the question of why Internet performance can be so prone to slow down, sometimes seeming almost to stop completely. To see this we can use another analogy: the flow of packet data is also like the flow of cars on a motorway. When there is no congestion the cars flow freely and all reach their destination with little delay, so that good, consistent performance is enjoyed by the car’s users. But when there is overload and there are too many cars for the road’s capacity, then congestion results. Cars keep slowing down then speeding up, and journey times become horribly delayed and unpredictable. This is like having too many packets for the capacity in the network: congestion builds up, and bad delays – poor performance – are the result.
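The congestion effect can be seen in a toy simulation of a single router queue (entirely made up for illustration, not a model of any real router): packets arrive at random, the router takes a random time to forward each one, and as the arrival rate approaches the router's capacity the average delay shoots up.

```python
import random

def average_delay(arrival_prob, service_prob=0.5, ticks=200_000, seed=1):
    """Average delay in a toy router queue. Each tick, a packet arrives
    with probability arrival_prob, and the packet at the head of the
    queue is forwarded with probability service_prob."""
    random.seed(seed)
    queue = []                  # arrival times of waiting packets
    total_delay = served = 0
    for t in range(ticks):
        if random.random() < arrival_prob:
            queue.append(t)
        if queue and random.random() < service_prob:
            total_delay += t - queue.pop(0)
            served += 1
    return total_delay / served

# Capacity here is 0.5 packets per tick, so these arrival rates
# correspond to loads of roughly 50%, 80% and 98%.
for arrival in (0.25, 0.4, 0.49):
    print(arrival, round(average_delay(arrival), 1))
```

Running it shows the motorway effect in numbers: at half load the queue stays short, but as load nears 100% the average delay grows dramatically, even though the router never actually stops working.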

Typically, Internet performance is assessed using broadband speed tests, where lots of test data is sent out and received by the computer being tested and the average speed of sending data and of receiving it is measured. Unfortunately, speed tests don’t help anyone – not even an expert – understand what people will experience when using real applications like an online game.

Electronic engineering researchers at Queen Mary, University of London have been studying these congestion effects in networks for a long time, mainly by using probability theory, which was originally developed in attempts to analyse games of chance and gambling. In the past ten years, they have been evaluating the impact of congestion on actual applications (like web browsing, gaming and Skype) and expressing this in terms of real human experience (rather than speed, or other technical metrics). This research has been so successful that one of the Professors at Queen Mary, Jonathan Pitts, co-founded a spinout company called Actual Experience Ltd so the research could make a real difference to industry and so ultimately to everyday users.

For businesses that rely heavily on IT, the human experience of corporate applications directly affects how efficiently staff can work. In the consumer Internet, human experience directly affects brand perception and customer loyalty. Actual Experience’s technology enables companies to manage their networks and servers from the perspective of human experience – it helps them fix the problems that their staff and customers notice, and invest their limited resources to get the greatest economic benefit.

So Internet gaming, posting letters, probability theory and cars stuck on motorways are all connected. But to make the connection you first need to study electronic engineering.

– Paul Curzon, Queen Mary University of London.

This article was originally published on the CS4FN website. It was also published in our 2023 Advent Calendar.


Marc Hannah and the graphics pipeline

Film and projectors
Image by Gerd Altmann from Pixabay

What do a Nintendo games console and the films Jurassic Park, Beauty and the Beast and Terminator II have in common? They all used Marc Hannah’s chips and linked programs for their amazing computer effects. It is important that we celebrate the work of Black computer scientists, and Marc is one who deserves the plaudits as much as anyone: his work has had a massive effect on the leisure time of everyone who watches movies with special effects or plays video games – and that is just about all of us.

In the early 1980s, with six others, Marc founded Silicon Graphics, becoming its principal scientist. Silicon Graphics was a revolutionary company, pioneering fast computers capable of running the kind of graphics programs on special graphics chips that suddenly allowed the film industry to do amazing special effects. Those chips and linked programs were designed by Marc.

Now computers and games consoles have special graphics chips that do fast graphics processing as standard, but it is Marc and his fellow innovators at Silicon Graphics who originally made it happen.

It all started with his work with James Clark on a system called the Geometry Engine while they were at Stanford. Their idea was to create chips that did all the maths needed for sophisticated manipulation of imagery. VLSI (Very Large Scale Integration), whereby whole computers were shrinking to fit on a chip, was revolutionising computer design: suddenly an entire microprocessor could be put on a single chip, because tens of thousands (now billions) of transistors could be put on a single slice of silicon. They pioneered the idea of using VLSI to create 3-D computer imagery, rather than just general-purpose computers, and with Silicon Graphics they turned their ideas into an industrial reality that changed both the film and games industries for ever.

Silicon Graphics was the first company to create a VLSI chip in this way, not to be a general-purpose computer, but just to manipulate 3-D computer images.

A simple 3D image in a computer might be implemented as the vertices (corners) of a series of polygons. To turn that into an image on a flat screen needs a series of mathematical manipulations of those points’ coordinates to find out where they end up in that flat image. What is in the image depends on the position of the viewer and where the light is coming from, for example. If the object is solid you also need to work out what is in front (and so seen) and what is behind (and so hidden). Each time the object, viewer or light source moves, the calculations need to be redone. This is done as a series of passes, each applying a different geometric manipulation, in what is called a geometry pipeline, and it is these calculations they focussed on. They started by working out which computations had to be really fast: the ones in the innermost loops of the image-processing code, executed over and over again. This was the complex code that meant processing images took hours or days, because it was doing lots of really complex calculation. Instead of trying to write faster code, though, they created hardware, a VLSI chip, to do the job. Their geometry pipeline was lightning fast because it avoided all the overhead of executing programs: the calculations that slowed things down were implemented directly in logic gates that did all that crucial maths very directly and so really quickly.
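As a flavour of the maths such a pipeline does, here is a toy Python sketch of one stage: projecting 3-D vertices onto a 2-D screen with a simple perspective divide. Real pipelines chain many more passes (rotation, clipping, lighting), and the viewer distance used here is an arbitrary made-up value.

```python
def project(vertex, viewer_distance=2.0):
    """Map a 3-D point (x, y, z) onto 2-D screen coordinates.
    Points further away (larger z) are scaled towards the centre,
    which is what makes distant objects look smaller."""
    x, y, z = vertex
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)

# The four corners of a square face, one unit behind the screen plane.
square = [(-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]
print([project(v) for v in square])
```

The same few arithmetic operations run for every vertex of every polygon, every frame, which is exactly why baking them into dedicated logic gates, as the Geometry Engine did, pays off so handsomely.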

The result was that their graphics pipeline chips, and the programs that worked with them, became the way that CGI (computer generated imagery) was done in films, allowing realistic imagery, and they were incorporated into games consoles too, allowing ever more realistic looking games.

So if some amazing special effects make some monster appear totally realistic this Halloween, or you get lost in the world of a totally realistic computer game, thank Marc Hannah, as his graphics processing chips originally made it happen.

– Paul Curzon, Queen Mary University of London
