Lynn Conway: revolutionising chip design

Lynn Conway. Photo from Wikimedia by Charles Rogers, CC BY-SA 2.5

MIT professor and transgender activist Lynn Conway, along with Carver Mead, completely changed the way we think about, do and teach VLSI (Very Large Scale Integration) chip design. Their revolutionary book on VLSI design quickly became the standard text used to teach the subject around the world. It wasn’t just a book, though: it was a whole new way of doing electronics. Their ideas formed the foundation of the way the electronics industry subsequently worked, and still does today. Calling her impact totally transformational is not an exaggeration. Prior to this, she had worked for IBM as part of a team making major advances in computer architecture. She was, however, sacked by IBM for being transgender when she decided to transition. Times and views have fortunately been transformed too, and IBM subsequently apologised for their blatant discrimination!

A core part of the electronics revolution Mead and Conway triggered was to start thinking of electronics design as more like software. They advocated using special software design packages and languages that allowed hardware designers to put together a circuit design essentially by programming it. Once a design was completed, tools in the package could simulate the behaviour of the circuit, allowing it to be thoroughly tested before it was physically built. The result was that designs were less likely to fail and creating them was much quicker. Even better, once tested, the design could then be compiled directly to silicon: the programmed version could be used to automatically create the precise layout and wiring of components below the transistor level to be laid onto the chip for fabrication.
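To get a feel for the idea, here is a toy sketch in Python (not one of the real hardware description languages or design tools that the approach led to) of a tiny circuit, a half adder, described as code and exhaustively simulated before anything is built:

```python
# Toy 'hardware as code': describe a half adder (adds two binary digits)
# using functions standing in for logic gates.
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two one-bit numbers, giving a sum bit and a carry bit."""
    return XOR(a, b), AND(a, b)

# Simulate it exhaustively before 'building' it: check every possible input.
for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        assert a + b == 2 * carry + s
        print(f"{a} + {b} -> sum={s}, carry={carry}")
```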

This software approach allowed levels of abstraction to be used much more easily in electronics design: bigger components being created from smaller ones, in turn built from smaller ones still. Once a smaller component was designed, its detailed implementation could be ignored in the design of the larger components that used it. A key part of this was Conway’s idea of scalable design rules to follow as the designs grew. Designers could focus on higher-level design, building on previous designs, with the details of creating the physical chips automated from the high-level designs.
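Continuing the toy sketch above, abstraction means the half adder can now be treated as a building block in its own right: a full adder (which also handles an incoming carry) can be described using two half adders without worrying about the gates inside them, and could in turn become a block inside a multi-bit adder.

```python
def AND(a, b): return a & b
def XOR(a, b): return a ^ b
def OR(a, b):  return a | b

def half_adder(a, b):
    # the smaller component from the previous sketch
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Built from two half adders plus an OR gate: the gate-level details
    inside half_adder no longer matter at this level of the design."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

# Again, simulate every case before committing anything to hardware.
for a in (0, 1):
    for b in (0, 1):
        for carry_in in (0, 1):
            s, carry_out = full_adder(a, b, carry_in)
            assert a + b + carry_in == 2 * carry_out + s
print("full adder correct for all 8 input combinations")
```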

This transformation is similar to (though probably even more transformational than) the switch from programming in low-level languages to writing programs in high-level languages and allowing a compiler to create the actual low-level code that is run. Just as that allowed vastly larger programs to be written, the use of electronic design automation software and languages allowed massively larger circuits to be created.

Conway’s ideas also led to MOSIS: an Internet-based service whereby different designs by different customers could be combined onto one wafer for production. This meant that the fabrication costs of prototyping were no longer prohibitive. Suddenly, creating designs was cheap and easy, a boon for both university and industrial research as well as for VLSI education. Conway, for example, pioneered the idea of allowing her students to create their own VLSI designs as part of her university course, with their designs all being fabricated together and the resulting chips quickly returned. Large numbers of students could now learn VLSI design in a practical way, gaining hands-on experience while still at university. This improvement in education, together with the ease with which small companies could suddenly prototype new ideas, made possible the subsequent boom in hi-tech start-up companies at the end of the 20th century.

Before Mead and Conway, chip design was done slowly by hand by a small elite and needed big industry support. Afterwards it could be done quickly and easily by just about anyone, anywhere.

Paul Curzon, Queen Mary University of London



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

The first computer music

Robot with horn
Robot Image by www_slon_pics from Pixabay

The first recorded music by a computer program was the result of a flamboyant flourish added on the end of a program that played draughts in the early 1950s. It played God Save the King.

The first computers were developed towards the end of the Second World War to do the number crunching needed to break the German codes. After the war, several groups around the world set about building computers, including three in the UK. This was still a time when computers filled whole rooms and it was widely believed that a whole country would only need a few. The envisioned uses tended to involve lots of number crunching.

A small group of people could see that computers could be much more fun than that. One of them was school teacher Christopher Strachey. When he was introduced to the Pilot ACE computer on a visit to the National Physical Laboratory, he set about writing, in his spare time, a program that could play draughts against humans. Unfortunately, the computer didn’t have enough memory for his program.

He knew Alan Turing, one of those wartime pioneers, from when they were both at university before the war. Luckily, he heard that Turing, now at the University of Manchester, was working on the new Ferranti Mark I computer, which would have more memory, so he wrote to him to see if he could get to play with it. Turing invited him to visit and, on the second visit, having had a chance to write a version of the program for the new machine, he was given the chance to try to get his draughts program to work on the Mark I. He was left to get on with it that evening.

He astonished everyone the next morning by having the program working and ready to demonstrate: he had worked through the night to debug it. Not only that, as it finished running, to everyone’s surprise, the computer played the National Anthem, God Save the King. As Frank Cooper, one of those there at the time, said: “We were all agog to know how this had been done.” Strachey’s reputation as one of the first wizard programmers was sealed.

The reason it was possible to play sounds on the computer at all had nothing to do with music. A special command called ‘Hoot’ had been included in the set of instructions programmers could use (called the ‘order code’ at the time) when programming the Mark I computer. The computer was connected to a loudspeaker and Hoot was used to signal things like the end of the program, alerting the operators. Apparently it hadn’t occurred to anyone there but Strachey that it was everything you needed to create the first computer music.
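To get a feel for how a single ‘hoot’ can become music, here is a rough Python sketch of the principle (it is nothing like the Mark I’s actual order code): repeating a short click fast enough produces a tone, and changing how fast it repeats changes the pitch. The sketch writes the result to a WAV file you can play.

```python
import struct
import wave

SAMPLE_RATE = 44100  # audio samples per second

def hoot_tone(frequency, duration, pulse_width=0.0005):
    """A train of short clicks repeated `frequency` times a second:
    repeated fast enough, the clicks blur into a note of that pitch."""
    pulse_samples = int(pulse_width * SAMPLE_RATE)
    period_samples = int(SAMPLE_RATE / frequency)
    samples = []
    for i in range(int(duration * SAMPLE_RATE)):
        in_pulse = (i % period_samples) < pulse_samples
        samples.append(20000 if in_pulse else 0)
    return samples

# Roughly the opening phrase of God Save the King (G G A F# G A),
# as (frequency in Hz, duration in seconds) pairs.
tune = [(392, 0.45), (392, 0.45), (440, 0.45),
        (370, 0.65), (392, 0.25), (440, 0.9)]

with wave.open("hoot_tune.wav", "wb") as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    for freq, dur in tune:
        data = hoot_tone(freq, dur)
        wav.writeframes(struct.pack("<%dh" % len(data), *data))
```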

He also programmed it to play Baa Baa Black Sheep and went on to write a more general program that would allow any tune to be played. When a BBC Live Broadcast Unit visited the University in 1951 to see the computer for Children’s Hour, the Mark I gave the first ever broadcast performance of computer music, playing Strachey’s music: the UK National Anthem, Baa Baa Black Sheep and also In the Mood.

While this was the first recorded computer music, it is likely that Strachey was beaten to creating the first actual programmed computer music by a team in Australia, who had similar ideas and did a similar thing probably slightly earlier. They used the equivalent hoot instruction on the CSIRAC computer, developed there by Trevor Pearcey and programmed by Geoff Hill. Both teams were years ahead of anyone else, and it was a long time before anyone took the idea of computer music seriously.

Strachey went on to be a leading figure in the design of programming languages, responsible for many of the key advances that have led to programmers being able to write the vast and complex programs of today.

The recording made of the performance has recently been rediscovered and restored, so you can now listen to it yourself (see below).

Paul Curzon, Queen Mary University of London

(updated from the archive)


Listen …



This blog is funded by UKRI, through grant EP/W033615/1.

Christopher Strachey and the secret of being a Wizard Debugger

(from the archive)

Code with BUG cross hairs over one area and rest faded out
Image by Pexels from Pixabay (edited)

Elite computer programmers are often called wizards, and one of the first wizards was Christopher Strachey, who went on to be a pioneer of the development of programming languages. The first program he wrote was an Artificial Intelligence program to play draughts: more complicated (and more fun) than the programs others were writing at the time. He was renowned not only as a programmer, but also as being amazingly good at debugging – getting programs actually to work. On a visit to Alan Turing in Manchester he was given the chance to get his programs working on the Ferranti Mark I computer there. He did so very quickly, working through the night to debug them, and even made one play God Save the King on the hooter. He immediately gained a reputation as being a “perfect” programmer. So what was his secret?

No-one writes complex code right first time, and programming teams spend more time testing programs than writing them in the first place, to try to find all the bugs – possibly obscure situations where the program doesn’t do what the person wanted. A big part of the skill of programming is being able to think logically and so work through what the program actually does, not just what it should do.

So what was Strachey’s secret that made him so good at debugging? When someone came to him with code that didn’t work, but they couldn’t work out why, he would start by asking them to explain how the program worked. As they talked, he would sit back, close his eyes and think about something completely different: a Beethoven symphony perhaps. Was this some secret way to tap his own creativity? Well, no. What would happen is that, as the person explained the program to him, they would invariably stop at some point and say something like: “Oh. Wait a minute…” and realise their mistake. By getting them to explain, he was making them work through in detail how the program worked. Strachey’s reputation would be enhanced yet again.

There is a lesson here for anyone wishing to be a good programmer. Spending time explaining your program is a good way to find problems. It is also an important part of learning to program, and ultimately of becoming a wizard.

– Paul Curzon, Queen Mary University of London



This blog is funded by UKRI, through grant EP/W033615/1.


Sophie Wilson: Where would feeding cows take you?

Chip design that changed the world

(Updated from the archive)

cows grazing
Image by Christian B. from Pixabay 

Some people’s innovations are so amazing it is hard to know where to start. Sophie Wilson is like that. She helped kick-start the original 1980s BBC Micro computer craze, then went on to help design the chips in virtually every smartphone ever made. Her more recent innovations are the backbone that is keeping broadband infrastructure going. The amount of money her innovations have made easily runs into tens of billions of dollars, and the companies she helped succeed make hundreds of billions of dollars. It all started with her feeding cows!

While still a student, Sophie spent a summer designing a system that could automatically feed cows. It was powered by a microprocessor called the MOS 6502: one of the first really cheap chips. As a result Sophie gained experience both in programming using the 6502’s set of instructions and in embedded computers: the idea that computers can disappear into other everyday objects. After university she quickly got a job as a lead designer at Acorn Computers and extended their version of the BASIC language, adding, for example, a way to name procedures so that it was easier to write large programs by breaking them up into smaller, manageable parts.

Acorn needed a new version of their microcomputer, based on the 6502 processor, to bid for a contract with the BBC for a project to inspire people about the fun of coding. Her boss challenged her to design it and get it working, all in only a week. He also told her someone else in the team had already said they could do it. Taking up the challenge, she built the hardware in a few days, soldering while watching the Royal Wedding of Charles and Diana on TV. With a day to go there were still bugs in the software, so she worked through the night debugging. She succeeded: within the week of taking up the challenge it worked! As a result Acorn won a contract from the BBC, the BBC Micro was born and a whole generation were subsequently inspired to code. Many computer scientists still remember the BBC Micro fondly 30 years later.

That would be an amazing lifetime achievement for anyone. Sophie, though, went on to even greater things. Acorn morphed into the company ARM on the back of more of her innovations. Ultimately this was about returning to the idea of embedded computers. The Acorn team realised that embedded computers were the future, and as ARM they have done more than anyone to make embedded computing a ubiquitous reality.

They set about designing a new chip based on the idea of Reduced Instruction Set Computing (RISC). The trend up to that point was to add ever more complex instructions to the set of programming instructions that computer architectures supported. The result was bloated systems that were hungry for power. The idea behind RISC chips was to do the opposite and design a chip with a small but powerful instruction set. Sophie’s colleague Steve Furber set to work designing the chip’s architecture – essentially the hardware. Sophie herself designed the instructions it had to support – its lowest-level programming language. The problem was to come up with the right set of instructions so that each could be executed really, really quickly – getting as much work done in as few clock cycles as possible. Those instructions also had to be versatile enough that, when sequenced together, they could do more complicated things quickly too. Other teams from big companies had been struggling to do this well despite all their clout, money and powerful mainframes to work on the problem. Sophie did it in her head. She then wrote a simulator for it in her BBC BASIC, running on the BBC Micro.

The resulting architecture and its descendants took over the world, with ARM’s RISC chips running 95% of all smartphones. If you have a smartphone you are probably using an ARM chip. They are also used in game controllers and tablets, drones, televisions, smart cars and homes, smartwatches and fitness trackers. All these applications, and embedded computers generally, need chips that combine speed with low energy needs. That is what RISC delivered, allowing the revolution to start.
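To give a flavour of what an instruction set and a simulator are, here is a toy sketch in Python of a register machine with a handful of made-up instructions (it is nothing like the real ARM instruction set, and far simpler than Sophie’s BBC BASIC simulator): the simulator just fetches each instruction in turn, works out which operation it is and carries it out.

```python
def run(program, registers=None):
    """Fetch-decode-execute loop for a toy machine with four registers."""
    regs = registers or {f"r{i}": 0 for i in range(4)}
    pc = 0  # program counter: index of the next instruction
    while pc < len(program):
        op, *args = program[pc]
        if op == "MOV":      # MOV rd, value: put a constant in a register
            regs[args[0]] = args[1]
        elif op == "ADD":    # ADD rd, ra, rb: rd = ra + rb
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "SUB":    # SUB rd, ra, rb: rd = ra - rb
            regs[args[0]] = regs[args[1]] - regs[args[2]]
        elif op == "BNE":    # BNE target, ra, rb: jump to target if ra != rb
            if regs[args[1]] != regs[args[2]]:
                pc = args[0]
                continue
        else:
            raise ValueError(f"unknown instruction {op}")
        pc += 1
    return regs

# Example program: add up 1..5 using a small loop of simple instructions.
program = [
    ("MOV", "r0", 0),          # running total
    ("MOV", "r1", 0),          # counter
    ("MOV", "r2", 5),          # limit
    ("MOV", "r3", 1),          # the constant one
    ("ADD", "r1", "r1", "r3"), # counter += 1
    ("ADD", "r0", "r0", "r1"), # total += counter
    ("BNE", 4, "r1", "r2"),    # loop back while counter != limit
]
print(run(program))  # {'r0': 15, 'r1': 5, 'r2': 5, 'r3': 1}
```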

If you want to thank anyone for your personal mobile devices, not to mention the way our cars, homes, streets and work are now full of helpful gadgets, start by thanking Sophie…and she’s not finished yet!

– Paul Curzon, Queen Mary University of London




This blog is funded by UKRI, through grant EP/W033615/1.


Diamond Dogs: Bowie’s algorithmic creativity

Bowie black and white portrait
Image by Cristian Ferronato from Pixabay

Rock star David Bowie co-wrote a program that generated lyric ideas. It gave him inspiration for some of his most famous songs. It generated sentences at random based on something called the ‘cut-up’ technique: an approach to writing lyrics that he was already using by hand. You take sentences from completely different places, cut them into bits and combine them in new ways. The randomness in the algorithm creates strange combinations of ideas, and he would use the ones that caught his attention, sometimes building whole songs around the ideas they expressed.

Tools for creativity

Rather than being an algorithm that is creative in itself, it is perhaps more a tool to help people (or perhaps other algorithms) be more creative. Both kinds of algorithm are of course useful. It does help highlight an issue with any “creative algorithm”, whether creating new art, music or writing. If the algorithm produces lots of output and a human then chooses the ones to keep (and show others), then where is the creativity? In the algorithm or in the person? That selection process of knowing what to keep and what to discard (or keep working on) seems to be a key part of creativity. Any truly creative program should therefore include a module to do such vetting of its work!

All that aside, an algorithm is certainly part of the reason Bowie’s song lyrics were often so surreal and intriguing!


Write a cut-up technique program

Why not try writing your own cut-up technique program to produce lyrics? You will likely need to use the string processing libraries of whatever language you choose. You could feed it things like the text of webpages or news reports. If you don’t program yet, do it by hand: cut up magazines and shuffle the part-sentences before gluing them back together.
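Here is one possible sketch in Python to get you started (the source sentences below are just made up; feed it whatever text you like):

```python
import random

def cut_up(texts, pieces_per_line=3, lines=6):
    """Generate lyric lines with the cut-up technique: chop the source
    sentences into short fragments, shuffle them, and glue random
    fragments back together into new lines."""
    fragments = []
    for text in texts:
        words = text.split()
        i = 0
        while i < len(words):
            size = random.randint(2, 4)   # cut into chunks of 2 to 4 words
            fragments.append(" ".join(words[i:i + size]))
            i += size
    random.shuffle(fragments)
    return [" ".join(random.sample(fragments, pieces_per_line))
            for _ in range(lines)]

# Any text will do: headlines, webpages, old diary entries...
sources = [
    "The city sleeps under a neon moon and nobody notices the rain",
    "Engineers reported record profits as the satellites fell silently",
    "She kept the photographs in a shoebox beneath the floorboards",
]
for line in cut_up(sources):
    print(line)
```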

– Paul Curzon, Queen Mary University of London (Updated from the archive)




This blog is funded by UKRI, through grant EP/W033615/1.


The algorithm that could not speak its name

Two hearts interlocked as jigsaw pieces
Image by PIRO4D from Pixabay (cropped)

The first program that was actually creative was probably written by Christopher Strachey, in 1952. It wrote love letters…possibly gay ones.

The letters themselves weren’t particularly special. They wouldn’t make your heart skip a beat if they were written to you, though they are kind of quaint. They actually have the feel of someone learning English doing their best but struggling to find the right words! It’s the way the algorithm works that was special. It would be simple to write a program that ‘wrote’ love letters thought up in advance and pre-programmed by the programmer. Strachey’s program could do much more than that, though: it could write letters he never envisaged. It did this using a few simple rules that, despite their simplicity, gave it the power to write a vast number of different letters. It was based on lists of different kinds of words chosen to be suitable for love letters. There was a list of nouns (like ‘affection’, ‘ardour’, …), a list of adjectives (like ‘tender’, ‘loving’, …), and so on.

It then just chose words from the appropriate list at random and plugged them into place in template sentences, a bit like slotting the last pieces into a jigsaw. It only used a few kinds of sentences as its basic rules, such as: “You are my < adjective > < noun >”. That rule could generate, for example, “You are my tender affection.” or “You are my loving heart.”, substituting in different combinations of its adjectives and nouns. It then combined several similar rules about different kinds of sentences to give a different love letter every time.

Strachey knew Alan Turing, who was a key figure in the creation of the first computers, and they may have worked on the ideas behind the program together. As both were gay it is entirely possible that the program was actually written to generate gay love letters. Oddly, the one word the program never uses is the word ‘love’ – a sentiment that at the time gay people just could not openly express. It was a love letter algorithm that just could not speak its name!

You can try out Strachey’s program [EXTERNAL], and the Twitter bot loveletter_txt is based on it [EXTERNAL]. Better still, why not write your own version? It’s not too hard.

Here is one of the offerings from my attempt to write a love letter writing program:

Beloved Little Cabbage,

I cling to your anxious fervour. I want to hold you forever. You are my fondest craving. You are my fondest enthusiasm. My affection lovingly yearns for your loveable passion.

Yours, keenly Q

The template I used was:

salutation1 + ” ” + salutation2 + “,”

“I ” + verb + ” your ” + adjective + ” ” + noun + “.”

“You are my ” + noun + “.”

“I want ” + verb + ” you forever.”

“I ” + verb + ” your ” + adjective + ” ” + noun + “.”

“My ” + noun1 + ” ” + adverb + ” ” + verb  + ” your ” + adjective + ” ” + noun2 + “.”

“Yours, ” + adverb + ” Q”

Here, the characters in double quotes stay the same, whereas the names not in quotes are variables: placeholders for a word chosen from the associated word list.

Experiment with different templates and different word lists and create your own unique version. If you can’t program yet, you can do it on paper by writing out the template and putting the different word lists on different coloured small post-it notes. Number them and use dice to choose one at random.
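If you do want to code it, here is a rough Python sketch of one way to implement a template like the one above (the word lists are just made-up examples to swap for your own):

```python
import random

# Made-up word lists: swap in your own favourites.
salutation1 = ["Beloved", "Darling", "Dearest"]
salutation2 = ["Little Cabbage", "Sweetheart", "Moppet"]
nouns = ["affection", "ardour", "craving", "enthusiasm", "fervour", "passion"]
adjectives = ["anxious", "fondest", "loveable", "tender"]
verbs = ["cling to", "long for", "treasure", "yearn for"]
inf_verbs = ["to hold", "to cherish", "to embrace"]  # for "I want ... you forever"
adverbs = ["keenly", "lovingly", "wistfully"]

pick = random.choice  # pick a word at random from a list

def love_letter():
    # Note: a fancier version would fix verb agreement (e.g. "yearns")
    # in the "My ..." sentence.
    body = (
        "I " + pick(verbs) + " your " + pick(adjectives) + " " + pick(nouns) + ". "
        + "You are my " + pick(nouns) + ". "
        + "I want " + pick(inf_verbs) + " you forever. "
        + "I " + pick(verbs) + " your " + pick(adjectives) + " " + pick(nouns) + ". "
        + "My " + pick(nouns) + " " + pick(adverbs) + " " + pick(verbs)
        + " your " + pick(adjectives) + " " + pick(nouns) + "."
    )
    return (pick(salutation1) + " " + pick(salutation2) + ",\n\n"
            + body + "\n\n"
            + "Yours, " + pick(adverbs) + " Q")

print(love_letter())
```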

Of course, you don’t have to write love poems. Perhaps you could use the same idea for a postcard-writing program this summer holiday…

– Paul Curzon, Queen Mary University of London (Updated from the archive)




This blog is funded by UKRI, through grant EP/W033615/1.