Designing a planet’s road network

Image by Gerd Altmann from Pixabay

This article is inspired by the Kilburn Lecture given by Professor Steve Furber of the University of Manchester on 20th June 2008.

Can you imagine designing the world’s road network from scratch? Plus all the pavements, footpaths, bridges and shortcuts? Can you imagine designing a computer with the complexity of a planet?

In Douglas Adams’ classic The Hitchhiker’s Guide to the Galaxy, there’s a whole planet devoted to designing other planets, and the Earth was one of their creations. In the story, Earth isn’t just a planet: it’s also the most powerful and most complicated computer ever made, and its job was to help explain the answer to the meaning of life. Aliens had to design every last bit of it – one character, Slartibartfast, had the particularly complex job of designing the world’s coastlines. His favourite thing to make was fjords, because he liked the decorative look they gave to a country. (He even won an award for designing Norway.)

That’s just a story though, right? Could anyone ever design a computer of planetary complexity from scratch? As it happens, that is exactly the task facing modern computer chip designers.

It is often said that modern chips are the most complex things humans have ever created, and if you imagine starting to design a whole planet’s road network, you will start to get the idea of what that means. The task is rather similar.

Essentially a computer chip is made up of millions of transistors: tiny elements that control how electrons flow round a circuit. A microscopic view of a chip like the one above looks very much like a road network with tracks connecting the transistors, which are a bit like junctions. Teams of chip designers have to design where the transistors go and how they are connected. The electrons flowing are a little like cars moving around the road network.

There’s an extra complication on a chip though. Designers of a road network only have to make sure people can get from A to B. In a computer, the changing voltages caused by the electrons as they move around are how data gets from one part of the chip to another. Data also gets switched around and transformed as calculations are performed at different points in the circuit. That means chip designers have to think about more than just connecting known places together. They have to make sure that as the electrons flow around, the data they represent still makes sense and computes the right answers. That’s how the whole thing is capable of doing something useful – like playing music, giving travel directions or controlling a computer game. It’s like designing a planetary road network, except all the traffic has to mean something in the end! Just like the fictional version of the Earth, only in fact.

It’s actually even harder for chip designers. Nowadays the connections they have to design are smaller than the wavelength of light. All that complexity has to fit, not on something as big as a planet, but crammed on a slab of silicon the size of your fingernail! Pretty impressive, but Earth’s intricate fjords are still more beautiful (especially the ones in Norway).

– Paul Curzon, Queen Mary University of London (from the archive)

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

From a handful of sand to a fistful of dollars

Where computer chips come from

Sitting at the heart of your computer, mobile phone, smart TV (or even smart toaster) is the microprocessor that makes it all work. These electronic ‘chips’ have millions of tiny electronic circuits on them allowing the calculations needed to make your gizmos work. But it may be surprising to learn that these silicon chips, now a billion-pound industry worldwide, are in fact mostly made of the same stuff that you find on beaches, namely sand.

A transistor is just like a garden hose with your foot on it

Sand is mostly made of silicon dioxide, and silicon, the second most abundant element in the Earth’s crust, has useful chemical properties as well as being very cheap. You can easily ‘add’ other chemicals to silicon and change its electrical properties, and it’s by using these different forms of silicon that you can make mini switches, or transistors, in silicon chips.


A transistor on a chip can be thought of like a garden hose. Water flows from the tap (the source) through the hose and out onto the garden (the drain), but if you were to stand on the hose with your foot and block the water flow, the watering would stop. An electronic transistor on a chip in its most basic form works like this, but with electrical charge rather than water running through the transistor (in fact the two parts of a transistor really are called the source and drain). The ‘gate’, the third part of the transistor, plays the part of your foot. Applying a voltage to the gate is like putting your foot on and off the hose: it controls whether charge flows through the transistor.

Lots of letter T’s

A billion pound industry made of sand

If you look at a transistor on a chip it looks like a tiny letter T: the top crossbar of the T is the source/drain part (the hose) and the upright part of the T is the gate (the foot). Using these devices you can start to build up logic functions. For example, if you connect the source and drain of two transistors together one after another, they can work out the logical AND function. How? Well think of this as one long hose with your foot and a friend’s foot available. If you stand on the hose no water will flow. If your friend stands on the hose no water will flow. If you both stand on the hose definitely no water will flow. It is only when you don’t stand on the hose AND your friend also doesn’t stand on the hose that the water flows. So you’ve built a simple logical function.
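The hose-and-feet version of AND can be sketched in a few lines of code (a toy illustration of the analogy, not real circuit design; the function and variable names are made up):

```python
# Toy model of two transistors in series, using the hose analogy.
# Treat "your foot is OFF the hose" as a True input. Water reaches
# the garden only if the first foot is off AND the second foot is off.

def water_flows(my_foot_off, friends_foot_off):
    """Two switches in series behave like a logical AND."""
    return my_foot_off and friends_foot_off

# Try all four combinations of feet on/off the hose:
for mine in (False, True):
    for friends in (False, True):
        print(mine, friends, "->", water_flows(mine, friends))
```

Only one of the four combinations lets the water through: both feet off, exactly matching the AND truth table.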

Printing chips

From such simple logic functions you can build very complex computers, if you have enough of them, and that’s again where silicon comes in. You can ‘draw’ with silicon down to very small sizes. In fact a silicon chip is printed with many different layers. For example, one layer has the patterns for all the sources and drains, the next layer chemically printed on top is the gates, the next the metallic connections between the transistors, and so on. These chips take millions of pounds to design and test, but once the patterns are correct it’s easy to stamp out millions of chips. It’s just a big chemical printing press. It’s the fact that you can produce silicon chips efficiently and cheaply, with more and more transistors on them each year, that drives the technology leaps we see today.

Beautiful silicon

Finally, you might wonder how chip companies protect their chip designs. They in fact protect them by registering the design of the masks they use in the layer printing process. Design registration is normally used to protect works of artistic merit, like company logos. Whether chip masks are quite as artistic doesn’t seem to matter. What does matter is that the chemical printing of silicon, and lots of computer scientists, have made all today’s computer technology possible. Now there is a beautiful thought to ponder when next on the beach.

– Paul Curzon, Queen Mary University of London

This article was first published on the original CS4FN website.


More on …

Magazines …

You probably won’t be surprised to learn that computer science can now also help improve the creation of computer chips. Computational lithography (literally ‘stone writing’) improves the resolution needed to etch the design of these tiny components onto the wafer-thin silicon, using ultraviolet light (photolithography = ‘stone writing with light’). Here’s a promotional video from ASML about computational lithography.



Marc Hannah and the graphics pipeline

Film and projectors
Image by Gerd Altmann from Pixabay

What do a Nintendo games console and the films Jurassic Park, Beauty and the Beast and Terminator II have in common? They all used Marc Hannah’s chips and linked programs for their amazing computer effects. It is important that we celebrate the work of Black computer scientists, and Marc is one who deserves the plaudits as much as anyone: his work has had a massive effect on the leisure time of everyone who watches movies with special effects or plays video games – and that is just about all of us.

In the early 1980s, with six others, Marc founded Silicon Graphics, becoming its principal scientist. Silicon Graphics was a revolutionary company, pioneering fast computers capable of running the kind of graphics programs on special graphics chips that suddenly allowed the film industry to do amazing special effects. Those chips and linked programs were designed by Marc.

Now computers and games consoles have special graphics chips that do fast graphics processing as standard, but it is Marc and his fellow innovators at Silicon Graphics who originally made it happen.

It all started with his work with James Clark on a system called the Geometry Engine while they were at Stanford. Their idea was to create chips that do all the maths needed for sophisticated manipulation of imagery. VLSI (Very Large Scale Integration), whereby tens of thousands (now billions) of transistors could be put on a single slice of silicon, was revolutionising computer design: suddenly a whole microprocessor could fit on a single chip. They pioneered the idea of using VLSI for creating 3D computer imagery, rather than just general-purpose computers, and with Silicon Graphics they turned their ideas into an industrial reality that changed both the film and games industries for ever.

Silicon Graphics was the first company to create a VLSI chip in this way, not to be a general-purpose computer, but just to manipulate 3-D computer images.

A simple 3D image in a computer might be implemented as the vertices (corners) of a series of polygons. To turn that into an image on a flat screen needs a series of mathematical manipulations of those points’ coordinates to find out where they end up in that flat image. What is in the image depends on the position of the viewer and where light is coming from, for example. If the object is solid you also need to work out what is in front, so seen, and what is behind, so not. Each time the object, viewer or light source moves, the calculations need to be redone. This is done as a series of passes performing different geometric manipulations, in what is called a geometry pipeline, and it is these calculations they focussed on.

They started by working out which computations had to be really fast: the ones in the innermost loops of the image-processing code, executed over and over again. This was the complex code that meant processing images took hours or days, because it was doing lots of really complex calculation. Instead of trying to write faster code, though, they created hardware, i.e. a VLSI chip, to do the job. Their geometry pipeline did the computation in a lightning-fast way because it avoided all the overhead of executing programs, instead implementing the crucial maths that slowed things down directly in logic gates, and so doing it really quickly.
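The flavour of those pipeline calculations can be sketched in a few lines of Python (a simplified illustration of the idea, not Silicon Graphics’ actual hardware; the function names are invented): each vertex is pushed through the same fixed sequence of geometric stages, here a rotation followed by a perspective projection onto the flat screen.

```python
import math

def rotate_y(point, angle):
    """Pipeline stage 1: rotate a 3D point about the vertical (y) axis."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, viewer_distance=5.0):
    """Pipeline stage 2: perspective-project a 3D point onto a 2D screen.
    Points further from the viewer are scaled towards the centre (smaller)."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)

# A square polygon given by its vertices; every vertex goes through
# the same pipeline. Redo this whenever the object or viewer moves.
square = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0)]
screen = [project(rotate_y(v, math.radians(30))) for v in square]
```

A real geometry pipeline has more stages (lighting, clipping, hidden-surface removal), and in the Geometry Engine each stage was implemented directly in hardware rather than as a function call.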

The result was that their graphics pipeline chips, and the programs that worked with them, became the way that CGI (computer-generated imagery) was done in films, allowing realistic imagery, and they were incorporated into games consoles too, allowing ever more realistic looking games.

So if some amazing special effects make some monster appear totally realistic this Halloween, or you get lost in the world of a totally realistic computer game, thank Marc Hannah, as his graphics processing chips originally made it happen.

– Paul Curzon, Queen Mary University of London

More on …


Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing


The tale of the mote and the petrel

by Paul Curzon, Queen Mary University of London
(Updated from the archive)

Giant petrel flying over ice and rock
Image by Eduardo Ruiz from Pixabay

Biology and computer science can meet in some unexpected, not to mention inhospitable, places. Who would have thought that the chemical soup in the nests of petrels studied by field biologists might help in the development of futuristic dust-sized computers, for example?

Just Keep Doubling

One of the most successful predictions in Computer Science was made by Gordon Moore, co-founder of Intel. Back in 1965 he suggested that the number of transistors that can be squeezed onto an integrated circuit – the hardware computer processors are made of – doubled every few years: computers get ever more powerful and ever smaller. In the 60 or so years since Moore’s paper it has remained an amazingly accurate prediction. Will it continue to hold though or are we reaching some fundamental limit? Researchers at chip makers are confident that Moore’s Law can be relied on for the foreseeable future. The challenge will be met by the material scientists, the physicists and the chemists. Computer scientists must then be ready for the Law’s challenge too: delivering the software advances so that its trends are translated into changes in our everyday lives. It will lead to ever more complex systems on a single chip and so ever smaller computers that will truly disappear into the environment.

Dusting computers

Motes are one technology developed on the back of this trend. The aim is to create dust-sized computers. For example, the world’s smallest computer as of 2015 was the Michigan Micro Mote. It was only a few millimetres big, but was a fully working computer system able to power itself, sense the world, process the data it collects and communicate that data to other computers. In 2018 IBM announced a computer with sides a millimetre long. Rising to the challenge, the Michigan team soon announced a new mote with sides a third of a millimetre! The shrinking of motes is not likely to stop!

Scatter motes around the environment and they form unobservable webs of intelligent sensors. Scatter them on a battlefield to detect troop movements or on or near roads to monitor traffic flow or pollution. Mix them in concrete and monitor the state of a bridge. Embed them in the home to support the elderly or in toys to interact with the kids. They are a technology that drives the idea of the Internet of Things where everyday objects become smart computers.

Battery technology has long been
the only big problem that remains.

What barriers must be overcome to make dust-sized motes a ubiquitous reality? Much of the area of a computer is taken up by its connections to the outside world – all those pins allowing things to be plugged in. They can now be replaced by wireless communications. Computers contain multiple chips, each housing separate processors. It is not the transistors that are the problem but the packaging – the chip casings are both bulky and expensive. Now we have “multicore” chips: large numbers of processors on a single small chip, courtesy of Moore’s Law. This gives computer scientists significant challenges over how to develop software to run on such complicated hardware and use its resources well. Power can come from solar panels, allowing motes to constantly recharge even from indoor light. Even then, though, they still need batteries to store the energy. Battery technology is the only big problem that remains.

Enter the Petrels

But how do you test a device like that? Enter the petrels. Intel’s approach is not to test futuristic technology on average users but to look for extreme ones who believe a technology will deliver them massive benefits. In the case of motes, their early extreme users were field biologists who wanted to keep tabs on birds in extremely harsh field conditions. Not only is it physically difficult for humans to observe sea birds’ nests on inhospitable cliffs, but human presence disturbs the birds. The solution: scatter motes in the nests to detect heat, humidity and the like, from which the state and behaviour of the birds can be deduced. A nest is an extremely harsh environment for a computer though, both physically and chemically. A whole bunch of significant problems, overlooked by normal lab testing, must be overcome. The challenge of deploying motes in such a harsh environment led to major improvements in the technology.


Moore’s Law is with us for a while yet, and with the efforts of material scientists, physicists, chemists, computer scientists and even field biologists and the sea birds they study it will continue to revolutionise our lives.

More on …

Related Magazines …



Ludwig Wittgenstein: tautology and truth tables

A jigsaw of the word truth with pieces missing
Image by Gerd Altmann from Pixabay

Ludwig Wittgenstein is one of the most important philosophers of the 20th century. His interest was in logic and truth, language, meaning and ethics. As an aside he made contributions to logical thinking that are a foundation of computing. He popularised truth tables, a way to evaluate logical expressions, and invented the modern idea of tautology. His life shows that you do not have to set out with your life planned out to ultimately do great things.

Wittgenstein was born in Austria, of three-quarters Jewish descent, and actually went to the same school as Hitler at the same time, as they were the same age to within a week. Had he still been in Austria at the time of World War II he would undoubtedly have been sent to a concentration camp. Hitler presumably would not have thought much of him had he known more about him at school. Not only did he have a Jewish background, he was bisexual: it is thought he fell in love four times, once with a woman and three times with men.

Interested originally in flying, and so in aeronautical engineering, he studied how kites fly in the upper atmosphere for his PhD in Manchester, flying the kites in the Peak District. He moved on to the study of propellers and designed a very advanced propeller that included mini jet engines on the propeller blades themselves. Studying propellers led him to an interest in advanced mathematics and then ultimately to the foundations of mathematics – a subject on which, years later, he taught a course at Cambridge University that Alan Turing attended. (Turing was teaching a course with the same title, but from a completely different point of view, at the time.) His interest in the foundations of maths led him to think about what facts are, how they relate to thoughts, language and logic, and what truth really is. However, World War I then broke out. During the war he fought for the Austro-Hungarian army, originally safe behind the lines, but at his own request he was sent to the Russian Front. He was ultimately awarded medals for bravery. While on military leave towards the end of the war he completed the philosophical work that made him famous, the Tractatus Logico-Philosophicus. After the war, though, he went to rural Austria and worked as a monastery gardener and then as a primary school teacher. His sister suggested this was “like using a precision instrument to open crates”, though since he got into trouble for being violent in his punishments of the children, the metaphor probably isn’t very apt: he doesn’t sound like a great teacher, and as a teacher he was more like a very blunt instrument.

In his absence, his fame in academia grew, however, and so eventually he returned to Cambridge, finally gained a PhD and ultimately became a fellow and then a Professor of Philosophy. By the time World War II broke out he was teaching philosophy in Cambridge but felt this was the wrong thing to be doing during a war, so despite now being a world famous philosopher went to work as a porter in Guy’s hospital, London.

His philosophical work was groundbreaking mainly because of his arguments about language and meaning with respect to truth. However, a small part of his work has a very concrete relevance to computing. His thinking about truth and logic led him to introduce the really important idea of a tautology as a redundant statement in logic. The ancient Greeks used the word, but in a completely different sense: something made “true” just because it was said more than once, so argued to be true in a rhetorical sense. In computational terms, Wittgenstein’s idea of a tautology is a logical statement about propositions that can be simplified to true. Propositions are just basic statements that may or may not be true, such as “The moon is made of cheese”. An example of a tautology is (a OR NOT(a)), where (a) is a variable that stands for a proposition: something that is either true or false. Putting in the concrete proposition “The moon is made of cheese” we get:

“(The moon is made of cheese) OR NOT (The moon is made of cheese)”

or in other words the statement

“The moon is made of cheese OR The moon is NOT made of cheese”

Logically, this is always true, whatever the moon is made of. “The moon is made of cheese” can be either true or false. Either it is made of cheese or not but either way the whole statement is true whatever the truth of the moon as one side or other of the OR is bound to be true. The statement is equivalent to just saying

“TRUE”

In other words, the original statement always simplifies to truth. More than that, whatever proposition you substitute in place of the statement “The moon is made of cheese” it still simplifies to true. For example, if we instead use the statement “Snoopy fought the Red Baron” then we get

“Snoopy fought the Red Baron OR NOT (Snoopy fought the Red Baron)”

Again, whatever the truth about Snoopy, this is a true statement. It is true whatever statement we substitute for (a) and whether it is true or false: (a OR NOT(a)) is a tautology guaranteed to be true by its logical structure, not by the meaning of the words of the propositions substituted in for a.

As part of this work Wittgenstein used truth tables, and is often claimed to have invented them. He certainly popularised them as a result of his work becoming so famous. However, Charles Sanders Peirce used truth tables first, 30 years earlier. Peirce was a philosopher too, known as the “Father of Pragmatism” (so hopefully that means he wouldn’t have minded Wittgenstein getting all the credit!)

A truth table is just a table whose rows cover all the combinations of true and false values of the variables in a logical expression, together with an answer for each combination. For example, a truth table for the operator NOT, telling us in all situations what (NOT a) means, is:

a      NOT a
TRUE   FALSE
FALSE  TRUE

A truth table for the NOT operator. Reading along the rows,
IF a is TRUE then (NOT a) is FALSE; IF a is FALSE then (NOT a) is TRUE. Image by CS4FN

The first thing that is important about truth tables is that they give a very clear and simple meaning (or “semantics”) to logical operators (like AND, OR and NOT), and so to statements asserting facts logically. Computationally, they make precise what the logical operators do, as the above table for NOT does. This of course matters a lot in programs, where logical operators control what the program does. It also matters in hardware, which is built up from circuits representing the logical operations. They provide the basis for understanding what both programs and hardware do.

The following is the truth table for the logical OR operator: again the last column gives the meaning of the operator, so the answer of computing the logical OR operation. This time there are two variables, (a) and (b), so four rows to cover the combinations.

a      b      a OR b
TRUE   TRUE   TRUE
TRUE   FALSE  TRUE
FALSE  TRUE   TRUE
FALSE  FALSE  FALSE

A truth table for the logical OR operator. Reading along the rows,
IF a is TRUE and b is TRUE then (a OR b) is TRUE;
IF a is TRUE and b is FALSE then (a OR b) is TRUE;
IF a is FALSE and b is TRUE then (a OR b) is TRUE;
IF a is FALSE and b is FALSE then (a OR b) is FALSE.
Image by CS4FN

Truth tables can be used to give more than just meaning to operators: they can be used for doing logical reasoning, to compute new truth tables for more complex logical expressions, including checking if they are tautologies. This is the basis of program verification (mathematically proving a program does the right thing) and similarly hardware verification. Let us look at (a OR (NOT a)). We make a column for (a), and then a second column gives the answer for (NOT a) from the NOT truth table. Adding a third column, we then look up in the OR truth table the answers given the values for (a) and (NOT a) on each row. For example, if a is TRUE then (NOT a) is FALSE. Looking up the row for TRUE/FALSE in the OR table we see the answer is TRUE, so that goes in the answer column for (a OR (NOT a)). The full table is then:

a      NOT a  a OR (NOT a)
TRUE   FALSE  TRUE
FALSE  TRUE   TRUE

A truth table for a OR (NOT a). Reading along the rows,
IF a is TRUE then (a OR (NOT a)) is TRUE;
IF a is FALSE then (a OR (NOT a)) is TRUE.
Image by CS4FN

Truth tables therefore give us an easy way to see if a logical expression is a tautology. If the answer column has TRUE as the answer for every row, as here, then the expression is a tautology. Whatever the truth of the starting fact a, the expression is always true. It has the same truth table as the expression TRUE(a), where TRUE is an operator which gives the answer true whatever its operand.

a      TRUE
TRUE   TRUE
FALSE  TRUE

A truth table for the TRUE operator. Whatever its operand it gives answer TRUE.
Image by CS4FN

We can do a similar thing for (a AND (NOT a)). We need the truth table for AND to do this.

a      b      a AND b
TRUE   TRUE   TRUE
TRUE   FALSE  FALSE
FALSE  TRUE   FALSE
FALSE  FALSE  FALSE

A truth table for the logical AND operator.
Image by CS4FN

We fill in the answer column based on the values from the (a) column and the (NOT a) column looking up the answer in the truth table for AND.

a      NOT a  a AND (NOT a)
TRUE   FALSE  FALSE
FALSE  TRUE   FALSE

A truth table for a AND (NOT a). Reading along the rows,
IF a is TRUE then (a AND (NOT a)) is FALSE;
IF a is FALSE then (a AND (NOT a)) is FALSE.
Image by CS4FN

This shows that it is not a tautology, as not all rows have the answer TRUE. In fact, we can see from the table that this expression actually simplifies to FALSE. It can never be true, whatever the facts involved, as (a) and (NOT a) are never true of any proposition (a) at the same time.
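Checks like these are easy to mechanise, which is exactly what verification tools do. A small Python sketch (my own illustration, not from the article): enumerate every row of the truth table and see whether the answer column is always TRUE.

```python
from itertools import product

def is_tautology(expr, num_vars):
    """expr is a function from truth values to a truth value (the answer
    column). Try every row of its truth table; a tautology is TRUE on all."""
    return all(expr(*row) for row in product([True, False], repeat=num_vars))

print(is_tautology(lambda a: a or (not a), 1))   # a OR (NOT a): a tautology
print(is_tautology(lambda a: a and (not a), 1))  # a AND (NOT a): never true
```

The same function works for expressions with any number of variables: `product` simply generates the 2, 4, 8, … rows of the truth table.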

Here is a slightly more complicated logical expression to consider: ((a AND b) IMPLIES a). Is this a tautology? We need the truth table for IMPLIES to work this out:

a      b      a IMPLIES b
TRUE   TRUE   TRUE
TRUE   FALSE  FALSE
FALSE  TRUE   TRUE
FALSE  FALSE  TRUE

A truth table for the logical IMPLIES operator.
Image by CS4FN

When we look up the values from the (a AND b) column and the (a) column in the IMPLIES truth table, we get the answers for the full expression ((a AND b) IMPLIES a) and find that it is a tautology as the answer is always true:

a      b      a AND b  a      (a AND b) IMPLIES a
TRUE   TRUE   TRUE     TRUE   TRUE
TRUE   FALSE  FALSE    TRUE   TRUE
FALSE  TRUE   FALSE    FALSE  TRUE
FALSE  FALSE  FALSE    FALSE  TRUE

A truth table for the logical expression (a AND b) IMPLIES a.
Image by CS4FN

Using the same kind of approach we can use truth tables to check if two expressions are equivalent. If they give the same final column of answers for the same inputs then they are interchangeable. Let’s look at (b OR (NOT a)).

a      b      NOT a  b OR (NOT a)
TRUE   TRUE   FALSE  TRUE
TRUE   FALSE  FALSE  FALSE
FALSE  TRUE   TRUE   TRUE
FALSE  FALSE  TRUE   TRUE

A truth table for the logical expression (b OR (NOT a)).
Image by CS4FN

This gives exactly the same answers in the final column as the truth table for IMPLIES above, so we have just shown that:

(a IMPLIES b) IS EQUIVALENT TO (b OR (NOT a))

We have proved a theorem about logical implication. (a IMPLIES b) has the same meaning as, so is interchangeable with, (b OR (NOT a)). All tautologies are interchangeable of course as they are all equivalent in their answers to TRUE. If we give a truth table for IS EQUIVALENT TO we could even show equivalences like the above are tautologies!
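That equivalence check can be mechanised the same way (again a sketch of my own, not the article’s notation): two expressions are interchangeable exactly when their answer columns agree on every row of the truth table.

```python
from itertools import product

def equivalent(expr1, expr2, num_vars):
    """Compare the answer columns of two expressions row by row."""
    return all(expr1(*row) == expr2(*row)
               for row in product([True, False], repeat=num_vars))

# a IMPLIES b, defined by its truth table, versus b OR (NOT a):
implies = lambda a, b: (not a) or b
print(equivalent(implies, lambda a, b: b or (not a), 2))
```

Program and hardware verification tools do essentially this, just on expressions with thousands of variables and much cleverer bookkeeping than brute-force enumeration.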

Tautologies, and equivalences, once proved, can also be the basis of further reasoning. Any time we have (a IMPLIES b) in a logical expression, for example, we can swap it for (b OR (NOT a)), knowing they are equivalent.

Truth tables helped Wittgenstein think about arguments and the deduction of facts using rules. In particular, he decided that the special rules other philosophers suggested should be used in deduction were not necessary as such. Deduction instead works simply from the structure of logic, which means logical statements follow from other logical statements. Truth tables gave a clear way to see the equivalences resulting from the logic. Deduction is not about meanings in language but about logic. Truth tables meant you could decide if something was true by looking at equivalences, so ultimately tautologies. They showed that some statements were universally true just by inspection of the truth table. For computer scientists they gave a way to define what logical operations mean and then to reason about the digital circuits and programs they designed: both to help understand them, so write them, and to get them right.

Wittgenstein started off as an engineer interested in building flying machines, and became in turn a mathematician, a soldier, a gardener and a teacher, as well as a hospital porter, but ultimately he is remembered as a great philosopher. Abstract though his philosophy was, along the way he provided computer scientists and electrical engineers with useful tools that helped them build thinking machines.

– Paul Curzon, Queen Mary University of London


More on …

Related Magazines …

cs4fn Issue 14 cover


EPSRC also supported this blog post through research grant EP/K040251/2 held by Professor Ursula Martin. 

Lynn Conway: revolutionising chip design

by Paul Curzon, Queen Mary University of London

Colourful line and dot abstract version of electronics
Image by Markus Christ from Pixabay

University of Michigan professor and transgender activist Lynn Conway, along with Carver Mead, completely changed the way we think about, do and teach VLSI (Very Large Scale Integration) chip design. Their revolutionary book on VLSI design quickly became the standard book used to teach the subject round the world. It wasn’t just a book though: it was a whole new way of doing electronics. Their ideas formed the foundation of the way the electronics industry subsequently worked, and still does today. Calling her impact totally transformational is not an exaggeration. Prior to this, she had worked for IBM, part of a team making major advances in microprocessor design. She was, however, sacked by IBM for being transgender when she decided to transition. Times and views have fortunately been transformed too, and IBM subsequently apologised for its blatant discrimination!

A core part of the electronics revolution Mead and Conway triggered was to start thinking of electronics design as more like software. They advocated using special software design packages and languages that allowed hardware designers to put together a circuit design essentially by programming it. Once a design was completed, tools in the package could simulate the behaviour of the circuit, allowing it to be thoroughly tested before being physically built. The result was that designs were less likely to fail, and creating them was much quicker. Even better, once tested, the design could be compiled directly to silicon: the programmed version could be used to automatically create the precise layout and wiring of components, below the transistor level, to be laid onto the chip for fabrication.

This software approach allowed levels of abstraction to be used much more easily in electronics design: bigger components being created from smaller ones, in turn built from smaller ones still. Once designed, the detailed implementation of those smaller components could be ignored when designing larger ones. A key part of this was Conway’s idea of scalable design rules to follow as the designs grew. Designers could focus on higher-level design, building on previous designs, with the details of creating the physical chips automated from those high-level descriptions.
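This “hardware as software” idea is easy to get a feel for with a toy sketch. The fragment below is ordinary Python, not a real hardware description language, and makes no claim about the actual tools Mead and Conway used: it builds a half adder purely out of NAND gates, builds a full adder out of half adders while ignoring their internals, then “simulates” the whole design exhaustively before anything physical exists.

```python
# Toy sketch of hierarchical circuit design, in Python rather than a
# real hardware description language: gates -> half adder -> full adder,
# each level treating the one below as a black box.

def nand(a, b):
    """Primitive gate: everything else is composed from this."""
    return 0 if (a and b) else 1

def half_adder(a, b):
    """Adds two bits, built only from NAND gates."""
    n = nand(a, b)
    total = nand(nand(a, n), nand(b, n))  # a XOR b
    carry = nand(n, n)                    # a AND b
    return total, carry

def full_adder(a, b, carry_in):
    """Adds three bits, built from half adders (plus an OR from NANDs)."""
    s1, c1 = half_adder(a, b)
    total, c2 = half_adder(s1, carry_in)
    carry_out = nand(nand(c1, c1), nand(c2, c2))  # c1 OR c2
    return total, carry_out

# "Simulate before fabricating": exhaustively test the whole design.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            total, carry = full_adder(a, b, c)
            assert a + b + c == total + 2 * carry
```

A real design package does the same thing at vastly larger scale, and can then compile the verified description down to the transistor-level layout to be fabricated.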

Lynn Conway:
Photo from wikimedia by Charles Rogers CC BY-SA 2.5

This transformation is similar to (though probably even more transformational than) the switch from programming in low-level languages to writing programs in high-level languages and letting a compiler create the actual low-level code that is run. Just as that allowed vastly larger programs to be written, the use of electronic design automation software and languages allowed massively larger circuits to be created.

Conway’s ideas also led to MOSIS: an Internet-based service whereby different designs by different customers could be combined onto one wafer for production. This meant that the fabrication costs of prototyping were no longer prohibitively expensive. Suddenly, creating designs was cheap and easy: a boon for university and industrial research as well as for VLSI education. Conway, for example, pioneered the idea of letting her students create their own VLSI designs as part of her university course, with the designs all fabricated together and the resulting chips quickly returned. Large numbers of students could now learn VLSI design in a practical way, gaining hands-on experience while still at university. This improvement in education, together with the ease with which small companies could suddenly prototype new ideas, made possible the subsequent boom in hi-tech start-up companies at the end of the 20th century.

Before Mead and Conway, chip design was done slowly, by hand, by a small elite, and needed big-industry support. Afterwards it could be done quickly and easily by just about anyone, anywhere.


More on …

Related Magazines …

cs4fn issue 4 cover
A hoverfly on a leaf

EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Sophie Wilson: Where would feeding cows take you?

Chip design that changed the world

(Updated from the archive)

cows grazing
Image by Christian B. from Pixabay 

Some people’s innovations are so amazing it is hard to know where to start. Sophie Wilson is like that. She helped kick-start the original 1980s BBC Micro computer craze, then went on to help design the chips in virtually every smartphone ever made. Her more recent innovations are the backbone keeping broadband infrastructure going. The money her innovations have made easily runs into tens of billions of dollars, and the companies she helped succeed make hundreds of billions of dollars. It all started with her feeding cows!

While still a student, Sophie spent a summer designing a system that could automatically feed cows. It was powered by a microcomputer based on the MOS 6502: one of the first really cheap microprocessors. As a result Sophie gained experience both in programming with the 6502’s instruction set and in embedded computing: the idea that computers can disappear into other everyday objects. After university she quickly got a job as a lead designer at Acorn Computers, where she extended their version of the BASIC language, adding, for example, a way to name procedures so that it was easier to write large programs by breaking them up into smaller, manageable parts.

Acorn needed a new version of their microcomputer, based on the 6502 processor, to bid for a contract with the BBC for a project to inspire people about the fun of coding. Her boss challenged her to design it and get it working in only a week. He also told her someone else in the team had already said they could do it. Taking up the challenge, she built the hardware in a few days, soldering while watching the Royal Wedding of Charles and Diana on TV. With a day to go there were still bugs in the software, so she worked through the night debugging. She succeeded: within the week it worked. As a result Acorn won the contract, the BBC Micro was born, and a whole generation was subsequently inspired to code. Many computer scientists still remember the BBC Micro fondly 30 years later.

That would be an amazing lifetime achievement for anyone, but Sophie went on to even greater things. Acorn morphed into the company ARM on the back of more of her innovations. Ultimately this was about returning to the idea of embedded computers: the Acorn team realised that embedded computers were the future, and as ARM they have done more than anyone to make embedded computing a ubiquitous reality.

They set about designing a new chip based on the idea of Reduced Instruction Set Computing (RISC). The trend up to that point was to add ever more complex instructions to the set of programming instructions that computer architectures supported. The result was bloated systems that were hungry for power. The idea behind RISC chips was to do the opposite: design a chip with a small but powerful instruction set. Sophie’s colleague Steve Furber set to work designing the chip’s architecture – essentially the hardware. Sophie herself designed the instructions it had to support – its lowest-level programming language. The problem was to come up with the right set of instructions so that each could be executed really, really quickly – getting as much work done in as few clock cycles as possible. Those instructions also had to be versatile enough that, sequenced together, they could do more complicated things quickly too.

Other teams from big companies had been struggling to do this well despite all their clout, money and powerful mainframes to work on the problem. Sophie did it in her head, and wrote a simulator for it in her BBC BASIC running on the BBC Micro. The resulting architecture and its descendants took over the world, with ARM’s RISC chips running 95% of all smartphones. If you have a smartphone you are probably using an ARM chip. They are also used in game controllers and tablets, drones, televisions, smart cars and homes, smartwatches and fitness trackers. All these applications, and embedded computers generally, need chips that combine speed with low energy needs. That is what RISC delivered, allowing the revolution to start.
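Sophie’s simulator was written in BBC BASIC, but the flavour of an instruction-set simulator is easy to sketch in any language. The toy below is in Python, and its three-instruction set is invented purely for illustration – it is not ARM’s real instruction set: a program is just a list of instructions, and the simulator steps through them, updating registers, exactly as the hardware eventually would.

```python
# A toy instruction-set simulator, in the spirit of prototyping a CPU
# design in software before building hardware. The three-instruction
# set here (MOV, ADD, BNE) is invented for illustration only.

def run(program, registers=None):
    """Execute a list of (op, dest, a, b) tuples on 4 registers."""
    regs = registers or [0, 0, 0, 0]
    pc = 0  # program counter: index of the next instruction
    while pc < len(program):
        op, dest, a, b = program[pc]
        if op == "MOV":      # load the constant a into register dest
            regs[dest] = a
        elif op == "ADD":    # regs[dest] = regs[a] + regs[b]
            regs[dest] = regs[a] + regs[b]
        elif op == "BNE":    # jump to index b if regs[dest] != regs[a]
            if regs[dest] != regs[a]:
                pc = b
                continue
        pc += 1
    return regs

# Sum 1..5 with a loop: r0 = running total, r1 = counter,
# r2 = increment, r3 = loop limit.
program = [
    ("MOV", 0, 0, 0),   # r0 = 0
    ("MOV", 1, 0, 0),   # r1 = 0
    ("MOV", 2, 1, 0),   # r2 = 1
    ("MOV", 3, 5, 0),   # r3 = 5
    ("ADD", 1, 1, 2),   # r1 += 1
    ("ADD", 0, 0, 1),   # r0 += r1
    ("BNE", 1, 3, 4),   # loop back to index 4 while r1 != r3
]
print(run(program)[0])  # prints 15
```

Running programs like this against a simulated instruction set lets a designer check that the instructions are genuinely versatile enough before committing the design to silicon.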

If you want to thank anyone for your personal mobile devices, not to mention the way our cars, homes, streets and work are now full of helpful gadgets, start by thanking Sophie…and she’s not finished yet!

– Paul Curzon, Queen Mary University of London


More on …

Related Magazines …


This blog is funded by UKRI, through grant EP/W033615/1.

QMUL CS4FN EPSRC logos