The Internet is now so much a part of life that, unless you are over 50, it’s hard to remember what the world was like without it. Sometimes we enjoy really fast Internet access, yet at other times it’s frustratingly slow! So the question is why, and what does this have to do with posting a letter, or cars on a motorway? And how did electronic engineers turn the problem into a business opportunity?
The communication technology that powers the Internet is built of electronics. The building blocks are called routers, and these convert the light-streams of information that pass down the fibre-optic cables into streams of electrons, so that electronics can be used to switch and re-route the information inside the routers.
Enormously high capacities are achievable, which is necessary because the performance of your Internet connection really matters, especially if you enjoy online gaming or do a lot of video streaming. Anyone who plays online games will be familiar with the problem: opponents apparently popping out of nowhere, or stuttery character movement.
So the question is – why is communicating over a modern network like the Internet so prone to odd lapses of performance when traditional land-line telephone services were (and still are) so reliable? The answer is that traditional telephone networks send data as a constant stream of information, while over the Internet, data is transmitted as “packets”. Each packet is a large group of data bits stuck inside a sort of package, with a header attached giving the address of where the data is going. This is why it is like posting a letter: a packet is like a parcel of data sent via an electronic “postal service”.
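If you like to think in code, here is a minimal sketch in Python of the idea (our own illustration only: the field names are made up, the addresses are reserved example addresses, and real protocols such as IP define their headers in precise binary detail):

    from dataclasses import dataclass

    @dataclass
    class Packet:
        """A toy model of a network packet: some payload data plus a header
        saying where it is going and where it came from."""
        destination: str   # where it is being sent, like the address on a letter
        source: str        # who sent it, like a return address on the envelope
        payload: bytes     # the chunk of data being carried

    # A message is normally split across many packets, each sent (and routed) separately.
    message = b"Hello over the Internet!"
    packets = [Packet("203.0.113.7", "198.51.100.2", message[i:i + 8])
               for i in range(0, len(message), 8)]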
But this still doesn’t really answer the question of why Internet performance can be so prone to slow down, sometimes seeming almost to stop completely. To see this we can use another analogy: the flow of packet data is also like the flow of cars on a motorway. When there is no congestion the cars flow freely and all reach their destination with little delay, so that the cars’ occupants enjoy good, consistent performance. But when there is overload and there are too many cars for the road’s capacity, then congestion results. Cars keep slowing down then speeding up, and journey times become horribly delayed and unpredictable. This is like having too many packets for the capacity in the network: congestion builds up, and bad delays – poor performance – are the result.
Typically, Internet performance is assessed using broadband speed tests, where lots of test data is sent and received by the computer being tested, and the average speeds of sending and receiving are measured. Unfortunately, speed tests don’t help anyone – not even an expert – understand what people will experience when using real applications like an online game.
Electronic engineering researchers at Queen Mary, University of London have been studying these congestion effects in networks for a long time, mainly by using probability theory, which was originally developed in attempts to analyse games of chance and gambling. In the past ten years, they have been evaluating the impact of congestion on actual applications (like web browsing, gaming and Skype) and expressing this in terms of real human experience (rather than speed, or other technical metrics). This research has been so successful that one of the Professors at Queen Mary, Jonathan Pitts, co-founded a spinout company called Actual Experience Ltd so the research could make a real difference to industry and so ultimately to everyday users.
For businesses that rely heavily on IT, the human experience of corporate applications directly affects how efficiently staff can work. In the consumer Internet, human experience directly affects brand perception and customer loyalty. Actual Experience’s technology enables companies to manage their networks and servers from the perspective of human experience – it helps them fix the problems that their staff and customers notice, and invest their limited resources to get the greatest economic benefit.
So Internet gaming, posting letters, probability theory and cars stuck on motorways are all connected. But to make the connection you first need to study electronic engineering.
What was the first technology for recording music: CDs? Records? 78s? The phonograph? No. Trained songbirds came before all of them.
Composer, musician, engineer and visiting fellow at Goldsmiths University, Sarah Angliss usually has a robot on stage performing live with her. These robots are not slick high-tech cyber-beings, but junk-modelled automata. One, named Hugo, sports a spooky ventriloquist doll’s head! Sarah builds and programs her robots herself.
She is also a sound historian, and worked on a Radio 4 documentary, ‘The Bird Fancyer’s Delight’, uncovering how birds have been used to provide music across the ages. During the 1700s, people trained songbirds to sing human-invented tunes in their homes. You could buy special manuals showing how to train your pet bird. If young birds were played a tune over and over again, with no other birds around to put them right, they would adopt that song as their own. Playing the recorder was one way to train them, but special instruments were also invented to do the job automatically.
With the invention of the phonograph, home songbird popularity plummeted but it didn’t completely die out. Blackbirds, thrushes, canaries, budgies, bullfinches and other songbirds have continued to be schooled to learn songs that they would never sing in the wild.
Adapted from an Image by Clker-Free-Vector-Images from Pixabay
Conjure up a stereotypical image of a scientist and they will likely be wearing a white coat. If they’re not brandishing test tubes, you might imagine them working with mice scurrying around a maze. In future the scientists may well be doing a lot of programming, and the mice, for their part, will be scurrying around their own virtual world wearing Virtual Reality goggles.
Scientists have long used mazes as a way to test the intelligence of mice, to the point that it has entered popular culture as a stereotypical thing that scientists in white lab coats do. Mazes do give ways to test the intelligence of animals, including exploring their memory and decision-making ability in controlled experiments. That can ultimately help us better understand how our brains work too, and give us a better understanding of intelligence. The more we understand animal cognition as well as human cognition, the more computer scientists can use that improved understanding to create more intelligent machines. It can also help neurobiologists find ways to improve our intelligence.
Flowers for Algernon is a brilliant short story, and later novel, based on this idea, using experiments on mice and humans to test surgery intended to improve intelligence. In a slightly different take on mice-and-maze experiments, Douglas Adams, in ‘The Hitchhiker’s Guide to the Galaxy’, famously claimed that the mice were actually pan-dimensional beings and that the experiments were really incredibly subtle ones the mice were performing on the humans. Whatever the truth of who is experimenting on whom, the experiments just took a great leap forward, because scientists at Northwestern University have created Virtual Reality goggles for their mice.
For a long time researchers at Northwestern have used a virtual reality version of maze experiments, with mice running on treadmills surrounded by screens projecting whatever the researchers want them to see, whether mazes, predators or prey. This has the advantage of being much easier to control than using physical mazes, and because the mice are actually stationary the whole time, just running on a treadmill, brain-scanning technology can be used to see what is actually happening in their brains while they face these virtual trials. The problem, though, is that the mice, with their 180-degree vision, can still see beyond the edges of the screens. The screens also give no sense of three dimensions, when, like us, mice naturally see in 3D. As the screens are not fully immersive, they are not fully natural, and that could affect the behaviour of the mice and so invalidate the experimental results.
That is why the Northwestern researchers invented the mousey VR goggles, the idea being that they would give a way to totally immerse the mice in their online world, and so improve the reliability of the experiments. In the current version the goggles are not actually worn by the mice, as they are still too heavy. Instead, the mouse’s head is held in place really close to them, with the same effect of total immersion. Future versions may be small enough for the mice to wear, though.
The scientists have already found that the mice react more quickly to events, like the sight of a predator, than in the old set-up, suggesting that being able to see they were in a lab was affecting their behaviour. Better still, there are new kinds of experiment that can be done with this set-up. In particular, the researchers have run experiments where an aerial predator like an owl appears from above the mice in a natural way. Mounting screens above them wasn’t possible before, as they got in the way of the brain-scanning equipment. What does happen when a virtual owl appears? The mice either run faster or freeze, just as in the wild. By scanning their brains while this is happening, the researchers can investigate how the mice perceive the threat, as well as how decision-making takes place at the level of their brain activity. The scientists also intend to run similar experiments where the mouse is the predator, for example chasing a virtual fly. Again, this would not have been possible previously.
That in any case is what we think the purpose of these new experiments is. What new and infinitely subtle experiments it is allowing the pan-dimensional mice to perform on us remains to be seen.
Ludwig Wittgenstein is one of the most important philosophers of the 20th century. His interests were in logic and truth, language, meaning and ethics. As an aside, he made contributions to logical thinking that are a foundation of computing. He popularised truth tables, a way to evaluate logical expressions, and invented the modern idea of tautology. His life shows that you do not have to set out with your life planned out to ultimately do great things.
Wittgenstein was born in Austria, of three-quarters Jewish descent, and actually went to the same school as Hitler at the same time, as they were the same age to within a week. Had he still been in Austria at the time of World War II he would undoubtedly have been sent to a concentration camp. Hitler presumably would not have thought much of him had he known more about him at school. Not only did he have a Jewish background, he was bisexual: it is thought he fell in love four times, once with a woman and three times with men.
Originally interested in flying, and so in aeronautical engineering, he studied how kites fly in the upper atmosphere as a research student in Manchester, flying the kites in the Peak District. He moved on to the study of propellers and designed a very advanced propeller that included mini jet engines on the propeller blades themselves. Studying propellers led him to an interest in advanced mathematics and then ultimately in the foundations of mathematics – a topic on which, years later, he taught a course at Cambridge University that Alan Turing attended (Turing was at the time teaching a course with the same title, but from a completely different point of view). His interest in the foundations of maths led him to think about what facts are, how they relate to thoughts, language and logic, and what truth really is.
Then World War I broke out. During the war he fought for the Austro-Hungarian army, at first safely behind the lines, but at his own request he was sent to the Russian Front, and he was ultimately awarded medals for bravery. While on military leave towards the end of the war he completed the philosophical work that made him famous, the Tractatus Logico-Philosophicus. After the war, though, he went to rural Austria and worked as a monastery gardener and then as a primary school teacher. His sister suggested this was “like using a precision instrument to open crates”, though since he got into trouble for being violent in punishing the children, the metaphor probably isn’t very apt: he doesn’t sound like a great teacher, and as a teacher he was more like a very blunt instrument.
In his absence, however, his fame in academia grew, and so eventually he returned to Cambridge, finally gained a PhD and ultimately became a fellow and then a Professor of Philosophy. By the time World War II broke out he was teaching philosophy at Cambridge, but he felt this was the wrong thing to be doing during a war, so, despite now being a world-famous philosopher, he went to work as a porter at Guy’s Hospital in London.
His philosophical work was groundbreaking mainly because of his arguments about language and meaning with respect to truth. However, a small part of his work has a very concrete relevance to computing. His thinking about truth and logic led him to introduce the really important idea of a tautology as a redundant statement in logic. The ancient Greeks used the word, but in a completely different sense: something made “true” just because it was said more than once, so argued to be true in a rhetorical sense. In computational terms, Wittgenstein’s idea of a tautology is a logical statement about propositions that can be simplified to true. Propositions are just basic statements that may or may not be true, such as “The moon is made of cheese”. An example of a tautology is (a OR NOT(a)), where (a) is a variable that stands for a proposition, so something that is either true or false. Putting in the concrete proposition “The moon is made of cheese” we get:
“(The moon is made of cheese) OR NOT (The moon is made of cheese)”
or in other words the statement
“The moon is made of cheese OR The moon is NOT made of cheese”
Logically, this is always true, whatever the moon is made of. “The moon is made of cheese” can be either true or false. Either the moon is made of cheese or it is not, but either way the whole statement is true, as one side or other of the OR is bound to be true. The statement is equivalent to just saying
“TRUE”
In other words, the original statement always simplifies to truth. More than that, whatever proposition you substitute in place of the statement “The moon is made of cheese”, it still simplifies to true. For example, if we instead use the statement “Snoopy fought the Red Baron” then we get
“Snoopy fought the Red Baron OR NOT (Snoopy fought the Red Baron)”
Again, whatever the truth about Snoopy, this is a true statement. It is true whatever statement we substitute for (a) and whether it is true or false: (a OR NOT(a)) is a tautology guaranteed to be true by its logical structure, not by the meaning of the words of the propositions substituted in for a.
As part of this work Wittgenstein used truth tables, and is often claimed to have invented them. He certainly popularised them as a result of his work becoming so famous. However, Charles Sanders Peirce had used truth tables 30 years earlier. Peirce was a philosopher too, known as the “Father of Pragmatism” (so hopefully that means he wouldn’t have minded Wittgenstein getting all the credit!)
A truth table is just a table that includes as rows all the combinations of true and false values of the variables in logical expressions together with an answer for those values. For example a truth table for the operator NOT, so telling us in all situations what (NOT a) means, is:
a       NOT a
TRUE    FALSE
FALSE   TRUE
A truth table for the NOT operator. Reading along the rows, IF a is TRUE then (NOT a) is FALSE; IF a is FALSE then (NOT a) is TRUE. Image by CS4FN
The first thing that is important about truth tables is that they give a very clear and simple meaning (or “semantics”) to logical operators (like AND, OR and NOT), and so to logical statements asserting facts. Computationally, they make precise what the logical operators do, as the above table for NOT does. This matters a lot in programs, where logical operators control what the program does. It also matters in hardware, which is built up from circuits representing the logical operations. Truth tables provide the basis for understanding what both programs and hardware do.
The following is the truth table for the logical OR operator: again the last column gives the meaning of the operator, the answer computed by the logical OR operation. This time there are two variables, (a) and (b), so four rows are needed to cover the combinations.
a       b       a OR b
TRUE    TRUE    TRUE
TRUE    FALSE   TRUE
FALSE   TRUE    TRUE
FALSE   FALSE   FALSE
A truth table for the logical OR operator. Reading along the rows, IF a is TRUE and b is TRUE then (a OR b) is TRUE; IF a is TRUE and b is FALSE then (a OR b) is TRUE; IF a is FALSE and b is TRUE then (a OR b) is TRUE; IF a is FALSE and b is FALSE then (a OR b) is FALSE. Image by CS4FN
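If you would rather generate tables like these than write them out by hand, here is a small Python sketch of our own (just an illustration, not anything of Wittgenstein’s!) that prints the truth table of a logical operator by trying every combination of TRUE and FALSE for its inputs:

    from itertools import product

    def print_truth_table(column_name, operator, num_inputs):
        """Print the truth table of a logical operator by listing every
        combination of truth values for its inputs, one row per line."""
        variable_names = [chr(ord('a') + i) for i in range(num_inputs)]  # a, b, ...
        print(*variable_names, column_name, sep="\t")
        for values in product([True, False], repeat=num_inputs):
            print(*values, operator(*values), sep="\t")

    print_truth_table("NOT a", lambda a: not a, 1)       # the NOT table above
    print_truth_table("a OR b", lambda a, b: a or b, 2)  # the OR table above

(Python prints True and False rather than TRUE and FALSE, but the tables are the same.)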
Truth tables can be used to give more than just meaning to operators: they can be used to do logical reasoning, computing new truth tables for more complex logical expressions, including checking whether they are tautologies. This is the basis of program verification (mathematically proving a program does the right thing) and, similarly, hardware verification. Let us look at (a OR (NOT a)). We make a column for (a), and then a second column gives the answer for (NOT a) from the NOT truth table. Adding a third column, we then look up in the OR truth table the answers given the values for (a) and (NOT a) on each row. For example, if a is TRUE then NOT a is FALSE. Looking up the row for TRUE/FALSE in the OR table we see the answer is TRUE, so that goes in the answer column for (a OR (NOT a)). The full table is then:
a       NOT a   a OR (NOT a)
TRUE    FALSE   TRUE
FALSE   TRUE    TRUE
A truth table for a OR (NOT a). Reading along the rows, IF a is TRUE then (a OR (NOT a)) is TRUE; IF a is FALSE then (a OR (NOT a)) is TRUE. Image by CS4FN
Truth tables therefore give us an easy way to see if a logical expression is a tautology. If the answer column has TRUE as the answer for every row, as here, then the expression is a tautology. Whatever the truth of the starting fact a, the expression is always true. It has the same truth table as the expression TRUE (a) where TRUE is an operator which gives answer true whatever its operand.
a       TRUE (a)
TRUE    TRUE
FALSE   TRUE
A truth table for the TRUE operator. Whatever its operand it gives answer TRUE. Image by CS4FN
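Checking for a tautology this way is easy to automate. Continuing our illustrative Python sketch from above, a logical expression is a tautology exactly when it gives TRUE on every row, that is, for every combination of truth values of its variables:

    from itertools import product

    def is_tautology(expression, num_variables):
        """A tautology gives True on every row of its truth table, i.e. for
        every combination of truth values of its variables."""
        return all(expression(*values)
                   for values in product([True, False], repeat=num_variables))

    print(is_tautology(lambda a: a or (not a), 1))   # True: (a OR (NOT a)) is a tautology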
We can do a similar thing for (a AND (NOT a)). We need the truth table for AND to do this.
a       b       a AND b
TRUE    TRUE    TRUE
TRUE    FALSE   FALSE
FALSE   TRUE    FALSE
FALSE   FALSE   FALSE
A truth table for the logical AND operator. Image by CS4FN
We fill in the answer column based on the values from the (a) column and the (NOT a) column, looking up the answer in the truth table for AND.
a       NOT a   a AND (NOT a)
TRUE    FALSE   FALSE
FALSE   TRUE    FALSE
A truth table for a AND (NOT a). Reading along the rows, IF a is TRUE then (a AND (NOT a)) is FALSE; IF a is FALSE then (a AND (NOT a)) is FALSE. Image by CS4FN
This shows that it is not a tautology, as not all rows have the answer TRUE. In fact, we can see from the table that it actually simplifies to FALSE. It can never be true whatever the facts involved, as (a) and (NOT a) are never both true of any proposition (a) at the same time.
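Reusing the is_tautology sketch from above, the same conclusion drops straight out:

    print(is_tautology(lambda a: a and (not a), 1))    # False: not a tautology
    # In fact it is a contradiction: it gives False on every row of its truth table.
    print(any(a and (not a) for a in [True, False]))   # False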
Here is a slightly more complicated logical expression to consider: ((a AND b) IMPLIES a). Is this a tautology? We need the truth table for IMPLIES to work this out:
a       b       a IMPLIES b
TRUE    TRUE    TRUE
TRUE    FALSE   FALSE
FALSE   TRUE    TRUE
FALSE   FALSE   TRUE
A truth table for the logical IMPLIES operator. Image by CS4FN
When we look up the values from the (a AND b) column and the (a) column in the IMPLIES truth table, we get the answers for the full expression ((a AND b) IMPLIES a) and find that it is a tautology as the answer is always true:
a       b       a AND b   a       (a AND b) IMPLIES a
TRUE    TRUE    TRUE      TRUE    TRUE
TRUE    FALSE   FALSE     TRUE    TRUE
FALSE   TRUE    FALSE     FALSE   TRUE
FALSE   FALSE   FALSE     FALSE   TRUE
A truth table for the logical expression (a AND b) IMPLIES a. Image by CS4FN
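The is_tautology sketch from earlier gives the same verdict, once IMPLIES is written out as a Python function (Python has no built-in implication operator, so we define one that matches the IMPLIES truth table above):

    implies = lambda x, y: (not x) or y                        # matches the IMPLIES table above
    print(is_tautology(lambda a, b: implies(a and b, a), 2))   # True: a tautology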
Using the same kind of approach we can use truth tables to check if two expressions are equivalent. If they give the same final column of answers for the same inputs then they are interchangeable. Let’s look at (b OR (NOT a)).
a       b       NOT a   b OR (NOT a)
TRUE    TRUE    FALSE   TRUE
TRUE    FALSE   FALSE   FALSE
FALSE   TRUE    TRUE    TRUE
FALSE   FALSE   TRUE    TRUE
A truth table for the logical expression (b OR (NOT a)). Image by CS4FN
This gives exactly the same answers in the final column as the truth table for IMPLIES above, so we have just shown that:
(a IMPLIES b) IS EQUIVALENT TO (b OR (NOT a))
We have proved a theorem about logical implication. (a IMPLIES b) has the same meaning as, so is interchangeable with, (b OR (NOT a)). All tautologies are interchangeable of course as they are all equivalent in their answers to TRUE. If we give a truth table for IS EQUIVALENT TO we could even show equivalences like the above are tautologies!
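The same row-by-row comparison is easy to automate too. Here is one last illustrative sketch, reusing the implies function defined above, that checks whether two expressions agree on every row of their truth tables:

    from itertools import product

    def are_equivalent(expr1, expr2, num_variables):
        """Two expressions are equivalent if they give the same answer on
        every row of the truth table."""
        return all(expr1(*values) == expr2(*values)
                   for values in product([True, False], repeat=num_variables))

    print(are_equivalent(lambda a, b: implies(a, b),
                         lambda a, b: b or (not a), 2))   # True: they are interchangeable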
Tautologies and equivalences, once proved, can also be the basis of further reasoning. Any time we have (a IMPLIES b) in a logical expression, for example, we can swap it for (b OR (NOT a)), knowing they are equivalent.
Truth tables helped Wittgenstein think about arguments and the deduction of facts using rules. In particular, he decided that the special rules of deduction other philosophers had suggested were not, as such, necessary. Deduction instead works simply from the structure of logic, which means logical statements follow from other logical statements. Truth tables gave a clear way to see the equivalences resulting from that logic. Deduction is not about meanings in language but about logic. Truth tables meant you could decide whether something was true by looking at equivalences, and ultimately at tautologies. They showed that some statements were universally true just by inspection of the truth table. For computer scientists they gave a way to define what logical operations mean and then reason about the digital circuits and programs they design, both to help understand and write them, and to get them right.
Wittgenstein started off as an engineer interested in building flying machines, then became a mathematician, a soldier, a gardener and a teacher, as well as a hospital porter, but ultimately he is remembered as a great philosopher. Abstract though his philosophy was, along the way he provided computer scientists and electrical engineers with useful tools that helped them build thinking machines.
Professor and transgender activist Lynn Conway, along with Carver Mead, completely changed the way we think about, do and teach VLSI (Very Large Scale Integration) chip design. Their revolutionary book on VLSI design quickly became the standard text used to teach the subject around the world. It wasn’t just a book, though: it was a whole new way of doing electronics. Their ideas formed the foundation of the way the electronics industry subsequently worked, and still does today. Calling her impact totally transformational is not an exaggeration. Prior to this she had worked for IBM, as part of a team making major advances in microprocessor design. She was, however, sacked by IBM for being transgender when she decided to transition. Times and views have fortunately been transformed too, and IBM subsequently apologised for its blatant discrimination!
A core part of the electronics revolution Mead and Conway triggered was to start thinking of electronics design as more like software. They advocated using special software design packages and languages that allowed hardware designers to put together a circuit design essentially by programming it. Once a design was completed, tools in the package could simulate the behaviour of the circuit, allowing it to be thoroughly tested before the circuit was physically built. The result was that designs were less likely to fail and creating them was much quicker. Even better, once tested, the design could then be compiled directly to silicon: the programmed version could be used to automatically create the precise layout and wiring of components below the transistor level to be laid onto the chip for fabrication.
This software approach allowed levels of abstraction to be used much more easily in electronics design: bigger components being created from smaller ones, in turn built from smaller ones still. Once a smaller component was designed, its detailed implementation could be ignored in the design of larger components. A key part of this was Conway’s idea of scalable design rules to follow as designs grew. Designers could focus on higher-level design, building on previous designs, with the details of creating the physical chips automated from the high-level designs.
Lynn Conway: Photo from wikimedia by Charles Rogers CC BY-SA 2.5
This transformation is similar to (though probably even more transformational than) the switch from programming in low-level languages to writing programs in high-level languages and allowing a compiler to create the actual low-level code that is run. Just as that allowed vastly larger programs to be written, the use of electronic design automation software and languages allowed massively larger circuits to be created.
Conway’s ideas also led to MOSIS: an Internet-based service whereby different designs by different customers could be combined onto one wafer for production. This meant that the fabrication costs of prototyping were no longer prohibitively expensive. Suddenly, creating designs was cheap and easy, a boon for both university and industrial research as well as for VLSI education. Conway, for example, pioneered the idea of allowing her students to create their own VLSI designs as part of her university course, with their designs all fabricated together and the resulting chips quickly returned. Large numbers of students could now learn VLSI design in a practical way, gaining hands-on experience while still at university. This improvement in education, together with the ease with which small companies could suddenly prototype new ideas, made possible the subsequent boom in hi-tech start-up companies at the end of the 20th century.
Before Mead and Conway, chip design was done slowly, by hand, by a small elite, and needed big industry support. Afterwards it could be done quickly and easily by just about anyone, anywhere.
When you are watching a sport in person, a quick glance at the scoreboard should tell you everything you need to know about what’s going on. But why not try to put that information right in the action? How much better would it be if all the players’ shirts could display not just the score, but how well each individual is doing?
Light up, light up
An Australian research group from the University of Sydney has made it happen. They rigged up two basketball teams’ shirts with displays that showed instant information as they played one another. The players (and everyone else watching the game) could see information that usually stays hidden, like how many fouls and points each player had. The displays were simple coloured bands in different places around the shirt, all connected up with tiny wires sewn into the shirts like thread. For every point a player got, for example, one of the bands on the player’s waist would light up. Each foul a player got made a shoulder band light up. There was also a light on players’ backs reserved for the leading team. Take the lead and all your team’s lights turned on, but lose it again and they went dark with defeat.
Sweaty but safe
All those displays were controlled by an on-board computer that each player harnessed to his or her body. That computer, in turn, was wirelessly connected to a central computer that kept track of winners, losers, fouls and baskets. The designers had to be careful about certain things, though. In case a player fell over and crushed their computer, the units were designed with ‘weak spots’ on purpose so they would detach rather than crumple underneath the player. And, since no one wants to get electrocuted while playing their favourite sport, the designers protected all the gear against moisture and sweat.
Keeping your head in the game
In the end, it was the audience at the game who got the most out of the system. They were able to track the players more closely than they normally would, and it helped those in the crowd who didn’t know much about basketball to understand what was going on. The players themselves had less time to think about what was on everyone’s clothes, as they were busy playing the game, but the system did help them a few times. One player said that she could see when her teammate had a high score, “and it made me want to pass to her more, as she had a ‘hot hand'”. Another said that it was easier to tell when the clock was running down, so she knew when to play harder. Plus, just seeing points on their shirts gave the players more confidence. There’s so much information available to you when you watch a game on television that, in a weird way, actually being in the stadium could make you less informed. Maybe in the future, the fans in the stands will see everything the TV audience does as well, when the players wear all their statistics on their shirts! We’ll see what the sponsors think of that…
– the CS4FN team, Queen Mary University of London (From the archive)
Whilst using a code so that a message is unreadable is cryptography, hiding information like this so that no one even knows there is a message to be read is called steganography.
Serious model making is of course something that needs a steady hand, patience and a good eye… so it is useful practice for the basic skills of electronics too.
– Kok Ho Huen and Paul Curzon, Queen Mary University of London
In 1952 computer scientist and playful inventor Marvin Minsky designed a machine which did one thing, and one thing only: it switched itself off. It was just a box with a motor, a switch, and something to flip (toggle) the switch back off after someone turned it on. Science fiction writer Arthur C. Clarke thought there was something ‘unspeakably sinister’ about a machine that exists just to switch itself off, and hobbyist makers continue to create their own variations today.