The first Internet concert

Severe Tire Damage. Image by Strubin, CC BY-SA 4.0 via Wikimedia Commons

Which band was the first to stream a concert live over the Internet? The Rolling Stones decided, in 1994, that it should be them. After all, they were one of the greatest, most innovative rock bands of all time. A concert from their tour of that year, in Dallas, was therefore broadcast live. Mick Jagger addressed not just the 50,000 packed into the stadium but the whole world, saying: “I wanna say a special welcome to everyone that’s, climbed into the Internet tonight and, uh, has got into the MBone. And I hope it doesn’t all collapse.” Unknown to them when planning this publicity coup, another band had got there first: a band of computer scientists from Xerox PARC, DEC and Apple, the research centres responsible for many innovations, including many of the ideas behind graphical user interfaces, networks and multimedia. They had played live on the Internet the year before!

The band which actually went down in history was called Severe Tire Damage. Its members were Russ Haines and Mark Manasse (from DEC), Steven Rubin (a Computer Aided Design expert from Apple) and Mark Weiser (from Xerox PARC, famous for the ideas behind calm computing). They were playing a concert at Xerox PARC on June 24, 1993. At the time, researchers there were working on a system called MBone, which provided a way to do multimedia over the Internet for the first time. Now we take that for granted (just about everyone with a computer or phone does Zoom and Teams calls, for example) but then the Internet was only set up for exchanging text and images from one person to another. MBone, short for multicast backbone, allowed packets of data of any kind (so including video data) from one source to be sent to multiple Internet addresses rather than just to one address. Sites that joined the MBone could send and receive multimedia data, including video, live to all the others in one broadcast. This meant that, for the first time, video calls between multiple people over the Internet were possible. The researchers needed to test the system, of course, so they set up a camera in front of Severe Tire Damage and live-streamed their performance to other researchers on the nascent MBone round the world (research can be fun at the same time as being serious!). Possibly there was only a single Australian researcher watching at the time, but it is the principle that counts!
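The core idea of multicast can be sketched in a few lines of Python. This is just a toy simulation of the principle (the real MBone used IP multicast routing deep in the network, nothing like this code): the sender hands over one packet and the network fans it out to everyone who has joined the group.

```python
# A toy simulation of multicast (NOT real networking code): one send,
# and the network delivers a copy to every node in the group.
class Node:
    def __init__(self, name):
        self.name = name
        self.inbox = []        # packets delivered to this node

class Network:
    def __init__(self):
        self.groups = {}       # group name -> set of member nodes

    def join(self, group, node):
        self.groups.setdefault(group, set()).add(node)

    def multicast(self, group, packet):
        # The sender transmits ONE packet; every member receives it.
        for node in self.groups.get(group, set()):
            node.inbox.append(packet)

net = Network()
alice, bob = Node("alice"), Node("bob")
net.join("concert", alice)     # both join the 'concert' group
net.join("concert", bob)
net.multicast("concert", "video frame 1")   # one send...
# ...and both alice's and bob's inboxes now hold "video frame 1"
```

With unicast, the sender would instead have to transmit one copy per receiver, which is exactly what made live video to many viewers impractical before the MBone.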

On hearing about the publicity around the Rolling Stones concert, and understanding the technology of course, they decided it was time for one more live Internet gig to secure their place in history. Immediately before the Rolling Stones started their gig, Severe Tire Damage broadcast their own live concert over the MBone to all those (including journalists) waiting for the main act to arrive online. In effect they had set themselves up as an unbilled Internet opening act for the Stones, even though they were nowhere near Dallas. Of course, that is partly the point: you no longer had to all be in one place to be part of the same concert. So the Rolling Stones, sadly for them, weren’t even the first to play live over the Internet on that particular day, never mind ever!

– Paul Curzon, Queen Mary University of London


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.


Let the brain take the strain

Cockpit controls. Image by Michi S from Pixabay

Whenever humans have complicated, repetitive jobs to do, designers set to work making computer systems that do those jobs automatically. Autopilot systems in airplanes are a good example. Flying a commercial airliner is incredibly complex, so a computer system helps the pilots by doing a lot of the boring, repetitive stuff automatically. But in any automated system, there has to be a balance between human and computer so that the human still has ultimate control. It’s a strange characteristic of human-computer interaction: the better an automated program, the more its users rely on it, and the more dangerous it can be.

The problem is that the unpredictable always happens. Automated systems run into situations the designers haven’t anticipated, and humans are still much better at dealing with the unexpected. If humans can’t take back control from the system, accidents can happen. For example, some airplanes used to have autopilots that took control of a landing until the wheels touched the ground. But then, one rainy night, a runway in Warsaw was so wet that the plane began skidding along the runway when it touched down. The skid was so severe that the sensors never registered the touchdown of the plane, and so the pilots couldn’t control the brakes. The airplane only stopped when it had overshot the runway. The designers had relied so much on the automation that the humans couldn’t fix the problem.
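The design flaw can be sketched as a toy Python function (an illustration of the interlock logic only, not real aircraft code, and with made-up sensor names): braking is only enabled once the sensors register touchdown, so a skid that stops the sensors registering locks the pilots out.

```python
# Toy sketch of the flawed interlock (not real avionics code): the
# brakes are only allowed once the sensors say the plane has landed.
def brakes_allowed(weight_on_wheels: bool, wheels_spinning: bool) -> bool:
    # Both sensor conditions must hold before the pilots get control.
    return weight_on_wheels and wheels_spinning

# A normal landing: sensors register touchdown, so braking works.
normal = brakes_allowed(weight_on_wheels=True, wheels_spinning=True)

# A skid on a wet runway: the sensors still say "flying", so the
# pilots cannot brake even though the plane is on the ground.
skidding = brakes_allowed(weight_on_wheels=False, wheels_spinning=False)
```

The fix designers argue for is not cleverer sensing alone, but an override: a way for the human to assert "we have landed" when the automation's picture of the world is wrong.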

Many designers now think it’s better to give some control back to the operators of any automated system. Instead of doing everything, the computer helps the user by giving them feedback. For example, if a smart car detects that it’s too close to the car ahead of it, the accelerator becomes more difficult to press. The human brain is still much better than any computer system at coming up with solutions to unexpected situations. Computers are much better off letting our brains do the tricky thinking.

– Paul Curzon, Queen Mary University of London


This article was first published on the original CS4FN website and a copy is available on page 19 of Issue 15 of the CS4FN magazine, which you can download as a PDF by clicking on the panel below. All of our previous issues are free to download as PDFs here.








Margaret Hamilton: Apollo Emergency! Take a deep breath, hold your nerve and count to 5

Buzz Aldrin standing on the moon. Image by Neil Armstrong, NASA via Wikimedia Commons – Public Domain

You have no doubt heard of Neil Armstrong, the first human on the moon. But have you heard of Margaret Hamilton? She was the lead engineer responsible for the Apollo mission software that got him there, and ultimately for ensuring the lunar module didn’t crash-land due to a last-minute emergency.

Being a great software engineer means you have to think of everything. You are writing software that will run in the future, encountering all the messiness of the real world (or real solar system, in the case of a moon landing). If you haven’t written the code to be able to deal with everything, then one day the thing you didn’t think about will bite back. That is why so much software is buggy or causes problems in real use. Margaret Hamilton was an expert not just in programming and software engineering generally, but also in building practically dependable systems with humans in the loop. A key interaction design principle is that of error detection and recovery: does your software help the human operators realise when a mistake has been made and quickly deal with it? This, it turned out, mattered a lot in safely landing Neil Armstrong and Buzz Aldrin on the moon.

As the lunar module was in its final descent, dropping from orbit to the moon with only minutes to landing, multiple alarms were triggered. An emergency was in progress at the worst possible time. What it boiled down to was that the system could only handle seven programs running at once, but Buzz Aldrin had just set an eighth running. Suddenly, the guidance system started replacing the normal screens with priority alarm displays, in effect shouting “EMERGENCY! EMERGENCY!” These were coded into the system but were never supposed to be shown, as the situations triggering them were supposed never to happen. The astronauts suddenly had to deal with situations that they should not have had to deal with, and they were minutes away from crashing into the surface of the moon.

Margaret Hamilton was in charge of the team writing the Apollo in-flight software, and the person responsible for the emergency displays. By adding them, she was covering all bases, even those that were supposedly never going to happen. She did more than that, though. Long before the moon landing, she had thought through the consequences if these “never events” did ever happen. Her team had therefore also included code in the Apollo software to prioritise what the computer was doing. In the situation that happened, it worked out what was actually needed to land the lunar module and prioritised that, shutting down the other software that was no longer vital. That meant that despite the problems, as long as the astronauts did the right things and carried on with the landing, everything would ultimately be fine.
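The prioritisation idea can be illustrated with a toy Python sketch. This is nothing like the real Apollo code (which was written in assembly for the Apollo Guidance Computer), and the task names and priority numbers here are made up for illustration: when there are more tasks than capacity, keep the most vital ones and shed the rest.

```python
# Toy illustration of priority-based task shedding (made-up tasks,
# not the real Apollo software): with too many tasks for the
# computer's capacity, keep the most vital and shut down the rest.
def schedule(tasks, capacity):
    """tasks: list of (name, priority) pairs; lower number = more vital."""
    by_priority = sorted(tasks, key=lambda task: task[1])
    kept = by_priority[:capacity]      # the vital work carries on
    shed = by_priority[capacity:]      # everything else is shut down
    return kept, shed

tasks = [("guidance", 1), ("landing-radar", 2), ("displays", 3),
         ("rendezvous-radar", 9)]      # illustrative names only
kept, shed = schedule(tasks, capacity=3)
# kept: guidance, landing-radar, displays; shed: rendezvous-radar
```

The crucial design decision is that overload degrades gracefully: the computer never simply gives up, it just drops the least important work first.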

Margaret Hamilton. Image by Daphne Weld Nichols, CC BY-SA 3.0 via Wikimedia Commons

There was still a potential problem, though. When an emergency like this happened, the displays appeared immediately so that the astronauts could understand the problem as soon as possible. Behind the scenes, however, the software was also dealing with the emergency, switching between programs and shutting down the ones not needed. Such switchovers took time on the 1960s Apollo computers, as computers were much slower than today. It was only a matter of seconds, but the highly trained human astronauts could easily process the warning information and start to deal with it faster than that. The problem was that if they pressed buttons, doing their part of the job by continuing with the landing, before the switchover completed, they would be sending commands to the original code, not the code that was still starting up to deal with the warning. That could be disastrous, and it is the kind of problem that can easily evade testing and only be discovered when code is running live, if the programmers do not deeply understand how their code works and spend time worrying about it.

Margaret Hamilton had thought all this through though. She had understood what could happen, and not only written the code, but also come up with a simple human instruction to deal with the human pilot and software being out of synch. Because she thought about it in advance, the astronauts knew about the issue and solution and so followed her instructions. What it boiled down to was “If a priority display appears, count to 5 before you do anything about it.” That was all it took for the computer to get back in synch and so for Buzz Aldrin and Neil Armstrong to recover the situation, land safely on the moon and make history.

Without Margaret Hamilton’s code and deep understanding of it, we would most likely now be commemorating the 20th of July as the day the first humans died on the moon, rather than as the day humans first walked on the moon.

– Paul Curzon, Queen Mary University of London






Pots fixing problematic acoustics

Surface waves. Image by Roger McLassus, CC BY-SA 3.0 via Wikimedia Commons.

Pots are buried in the walls of medieval churches and monasteries across Europe: in the UK, Sweden, Denmark and Serbia. Why? Are they just a weird form of decoration? Actually, they are there to fix problematic acoustics.

The problem

First of all, what do we mean by ‘problematic’ acoustics? When sound waves move around a room they reflect off the walls in a way that creates strange sound effects when they meet their reflections.

It happens because of what are called ‘standing waves’. Imagine dropping a pebble into a bath. The ripples create patterns in the water where they interfere with those that have bounced off the sides. As two ripples pass in opposite directions, if the movement pushing a molecule up from one ripple exactly cancels out the movement pushing it down from the other, and keeps doing so, then at that point the molecule remains still. On either side, the two ripples reinforce each other rather than cancelling out, giving the peaks and troughs of the combined wave. The result is that the ripples appear to stop moving forward: a standing wave.
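You can check the cancellation with a little Python, using the standard equation for two identical waves travelling in opposite directions (the amplitude, wavelength and period used here are arbitrary example values):

```python
import math

# Two identical waves travelling in opposite directions add up to
#   y = A*sin(k*x - w*t) + A*sin(k*x + w*t) = 2*A*sin(k*x)*cos(w*t)
# Wherever sin(k*x) = 0 the two always cancel: a node that never moves.
A = 1.0
k = 2 * math.pi   # wavelength of 1 unit
w = 2 * math.pi   # period of 1 time unit

def combined(x, t):
    return A * math.sin(k * x - w * t) + A * math.sin(k * x + w * t)

# x = 0.5 is a node: the displacement stays (essentially) zero always.
node_values = [combined(0.5, t) for t in (0.0, 0.1, 0.3, 0.7)]

# x = 0.25 is an antinode: at t = 0 the waves reinforce to double height.
antinode_value = combined(0.25, 0.0)
```

The nodes sit still while the antinodes swing between double-height peaks and troughs, which is exactly the loud-spot/dead-spot pattern described below.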

Sound waves are like water waves, except that the air molecules vibrate back and forth along the direction the sound travels, rather than up and down as water molecules do. The same effects therefore happen when sound waves meet, and standing waves can form. This is bad for two reasons. Standing waves take more time to die away after the sound source has been silenced than other sounds. Worse, the sound’s volume varies around the room depending on whether it is at a point where the waves cancel out (no sound) or where they enhance each other (loud). That’s ‘problematic’ acoustics!

Standing wave. Image by Lucas Vieira, own work, Public Domain, via Wikimedia Commons

These acoustic problems ultimately come about because of what is known as ‘resonance’. That is where a sound repeatedly bounces back and forth across a space at a particular frequency. Frequencies that are directly tied to the room’s dimensions cause most problems. Called the ‘resonant frequencies’, they involve a whole number of wave troughs and crests fitting in the space between the walls. That is what leads to standing waves, as the original and reflected wave coincide exactly. The lowest resonant frequency is also called the ‘fundamental frequency’. It’s the one where half a wavelength (a single crest or trough) fits in the space.

There are three different types of resonance that develop in a room from sounds bouncing off the walls, called axial, tangential and oblique modes. Axial modes result from a sound bouncing back and forth between two facing walls. Tangential ones happen when the waves reflect around all four walls. Oblique modes are the most complicated and result from sound bouncing off the ceiling and floor too. Of all these, it turns out the worst are the axial modes. To improve the acoustics of a room you need to absorb the sounds at these resonant frequencies. But how?
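You can estimate a room’s axial mode frequencies with a few lines of Python, using the standard formula f_n = n × c / (2L), where c is the speed of sound and L the distance between the facing walls (the 10 m room here is just an example, not a measurement of any real church):

```python
# Axial mode frequencies of a room come from the standard formula
#   f_n = n * c / (2 * L)
# a whole number of half-wavelengths fitting between two facing
# walls a distance L apart.
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 C

def axial_modes(wall_distance_m, count=3):
    return [n * SPEED_OF_SOUND / (2 * wall_distance_m)
            for n in range(1, count + 1)]

# For facing walls 10 m apart (an example, not a real building):
modes = axial_modes(10.0)   # roughly [17.15, 34.3, 51.45] Hz
```

Those are very low, bass-heavy frequencies, which is why it is low sounds that big stone rooms struggle with most.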

The solution

OK, now we know the problem, but how do we deal with it? A solution is the ‘Helmholtz resonator’, named after a device created by Hermann von Helmholtz in the 1850s as part of his studies to identify the ‘tones’ of sounds. A Helmholtz resonator is just the phenomenon of air resonating in a cavity. It is the way you get a tone from blowing across the mouth of an empty bottle. The frequency of the tone is the resonant frequency of the bottle. If you change the volume of the air cavity or the length or diameter of the neck of the bottle you change its resonant frequency and so the tone.
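The standard textbook formula for a Helmholtz resonator’s frequency is f = (c/2π)√(A/(VL)), where A is the neck’s cross-sectional area, L its length and V the cavity volume. Here is a quick Python sketch; the bottle dimensions are made up for illustration, and real calculations add an ‘end correction’ to the neck length, which this simple version leaves out:

```python
import math

# Simple textbook Helmholtz resonator frequency (no end correction):
#   f = (c / 2*pi) * sqrt(A / (V * L))
# A = neck cross-section area, L = neck length, V = cavity volume.
SPEED_OF_SOUND = 343.0  # metres per second

def helmholtz_frequency(neck_area_m2, neck_length_m, cavity_volume_m3):
    return (SPEED_OF_SOUND / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# Made-up dimensions for a wine-bottle-sized resonator: a 1 cm radius,
# 5 cm long neck on a 0.75 litre cavity.
f_bottle = helmholtz_frequency(neck_area_m2=math.pi * 0.01 ** 2,
                               neck_length_m=0.05,
                               cavity_volume_m3=0.00075)
# f_bottle comes out at roughly 160 Hz; a bigger cavity gives a lower note.
```

Notice how the formula matches the blowing-across-a-bottle experiment: a larger cavity volume or a longer, narrower neck lowers the resonant frequency, just as the article says.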

A Helmholtz resonator actually absorbs sound at its resonant frequency and at a small range of nearby frequencies. This happens because when a sound strikes the resonator’s opening, the air mass in the neck starts to vibrate strongly at that resonant frequency and tries to leave. That makes the pressure of the air in the cavity lower than the outside. As a result it draws the air back into the cavity. This process repeats but energy is lost each time, which causes the wave, of this particular resonant frequency, to dissipate. That means that specific sound is absorbed by the resonator. Helmholtz resonators also reradiate the sound that is not absorbed in all directions from the opening. That means any energy that wasn’t absorbed is spread around the room and that improves the room’s acoustics too.

So back to those pots in the walls of medieval churches. What are they for? Well, they would have acted as Helmholtz resonators, so they were presumably designed to remove low-frequency sounds and so correct the acoustics of the vaults and domes. Ashes have been found in some of the pots. That would have increased the range of sound frequencies absorbed as well as helped spread the unabsorbed sound. St Andrew’s Church in Lyddington, Rutland, built in the 14th century, has some of the finest examples of these acoustic jars in the UK. Helmholtz resonators obviously predate Helmholtz, actually going back to the ancient Greeks and Romans. The pots in churches are thought to be based on the ideas of the Roman architect Vitruvius, who discussed the use of resonant jars in the design of amphitheatres to improve the clarity of the speakers’ voices.

Designers of acoustic spaces like concert halls now use a variety of techniques to fix acoustic problems, including Helmholtz resonators, resonant panels and tube traps. They’re all efficient ways of absorbing low-frequency sounds. Helmholtz resonators, though, have the particular advantage of being able to treat localised ‘problematic’ frequencies.

Those church designers were apparently rather sophisticated acoustic engineers. They had to be, of course. It would have been a little unfortunate to build a church so everyone could hear the word of God, only to have those words resonate with the walls rather than with the congregation.

– Dimitrios Giannoulis, Queen Mary University of London


Magazines …

This article was originally published on the CS4FN archive website and can also be found on pages 8 and 9 of issue 4 of Audio! Mad About Music Technology, our series of magazines celebrating sound and tech.




You cannot be serious! …Wimbledon line calls go wrong

Image by Felix Heidelberger from Pixabay (cropped)

The 2025 tennis championships are the first at which Wimbledon has completely replaced its human line judges with an AI vision and decision system, Hawk-Eye. After only a week it caused controversy, with the system being updated after it failed to call a glaringly out ball in a Centre Court match between Brit Sonay Kartal and Anastasia Pavlyuchenkova. Apparently it had been switched off by mistake mid-game. This raises issues inherent in all computer technology replacing humans: that it can go wrong, the need for humans-in-the-loop, the possibility of human error in its use, and what you do when it does go wrong.

Perhaps because it is a vision system rather than generative AI, there has been little talk of whether Hawk-Eye is 100% accurate or not. Vision systems do not hallucinate in the way generative AI does, but they are still not infallible. The opportunity for players to appeal has been removed, however: in the original way Hawk-Eye was used, humans made the call and players could ask for Hawk-Eye to check. Now, Hawk-Eye makes a decision and basically that is it. A picture, generated by Hawk-Eye, is shown on screen of a circle relative to the line to ‘prove’ the ball was in or out as claimed. It is then taken as gospel. Of course, it is just reflecting Hawk-Eye’s decision – what it “saw” – not reality and not any sort of actual separate evidence. It is just a visual version of the call shouted. However, it is taken as though it is absolute proof, with no argument possible. If it is aiming to be really, really dependable then Hawk-Eye will have multiple independent systems sensing in different ways and voting on the result, as that is one of the ways computer scientists have invented to program dependability. However, whether it is 100% accurate isn’t really the issue. What matters is whether it is more accurate, making fewer mistakes, than human line judges. Undoubtedly it is, so it is an improvement, and some uncaught mistakes are not actually the point.
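That voting idea, one form of what computer scientists call N-version programming, can be sketched in Python. This is just an illustration of the dependability technique, not how Hawk-Eye actually works:

```python
from collections import Counter

# A sketch of majority voting across independent sensing systems:
# each system makes its own call, and the overall decision is
# whichever call a majority of the systems agree on.
def majority_vote(calls):
    """calls: list of independent decisions, e.g. ["out", "out", "in"]."""
    winner, votes = Counter(calls).most_common(1)[0]
    # Require a strict majority; otherwise report no decision.
    return winner if votes > len(calls) / 2 else None

majority_vote(["out", "out", "in"])   # a clear majority for "out"
majority_vote(["out", "in"])          # split decision: no majority
```

The point of using independent systems that sense in different ways is that they are unlikely to all make the same mistake at the same time, so the majority answer is more dependable than any single system’s.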

However, the mistake in this problem call was different. The operators of the system had mistakenly switched it off mid-match due to “human error”. That raises two questions. First, why was it designed so that a human could accidentally turn it off mid-match? Don’t blame that person, as it should not have been possible in the first place. Fix the system so it can’t happen again. That is what, within a day, the Lawn Tennis Association claim to have done (whether resiliently remains to be seen).

However, the mistake begs another question. Wimbledon had not handed the match over to the machines completely. A human umpire was still in charge. There was a human in the loop. They, however, we were told, had no idea the system was switched off until the call for a ball very obviously out was not made. If that is so, why not? Hawk-Eye supposedly made two calls of “Stop”. Was that its way of saying “I am not working so stop the match”? If it was such a message to the umpire, it is not a very clear way to make it, and it is guaranteed to be disruptive. It sounds a lot like a 404 error message, added by a programmer for a situation that they do not expect to occur!

A basic requirement of a good interactive system is that the system state is visible: that it was not even switched on should have been totally obvious from the controls the umpire had, well before the bad call. That needs to be fixed too, just in case there is still a way Hawk-Eye can be switched off. It begs the question of how often the system has been accidentally switched off, or powered down temporarily for other reasons, with no one knowing, because there was no glaringly bad call to miss at the time.

Another issue is that the umpire supposedly did follow the proper procedure, which was not to just call the point (as might have happened in the past, given he apparently knew “the ball was out!”) but instead to have the point replayed. That was unsurprisingly considered unfair by the player who lost a point they should have won. Why couldn’t the umpire make a decision on the point? Perhaps because humans are no longer trusted at all as they were before. As suggested by Pavlyuchenkova, there is no reason why there cannot be a video review process in place so that the umpire can make a proper decision. That would be a way to add back in a proper appeal process.

Also, as was pointed out, what happens if the system fully goes down? Does Wimbledon now have to just stop until Hawk-Eye is fixed: “AI stopped play”? We have seen lots of examples, over many decades as well as recently, of complex computer systems crashing. Hawk-Eye is a complex system, so problems are likely possible. Programmers make mistakes (especially when doing quick fixes to fix other problems, as was apparently just done). If you replace people with computers, you need a reliable and appropriate backup that can kick into place immediately, from the outset. A standard design principle is that programs should help humans avoid making mistakes, help them quickly detect mistakes when they do make them, and help them recover.

A tennis match is not actually high stakes by human standards. No one dies because of mistakes (though a LOT of money is at stake), but the issues are very similar in a wide range of systems where people can die – from control of medical devices to military applications, space, aircraft and nuclear power plant control – in all of which computers are replacing humans. We need good solutions, and they need to be in place before something goes wrong, not after. An issue as systems become more and more automated is that the human left in the loop to avoid disaster has more and more trouble tracking what the machine is doing as they do less and less, making it harder to step in and correct problems in a timely way (as was likely the case with the Wimbledon umpire). The humans need to be not just a little bit in the loop but centrally so. How you do that for different situations is not easy to work out, but as tennis has shown, it can’t just be ignored. There are better solutions than Wimbledon is using, but to even consider them you have to first accept that computers do make mistakes, and so know there is a problem to be solved.

– Paul Curzon, Queen Mary University of London




