Alan Turing’s life

by Jonathan Black, Paul Curzon and Peter W. McOwan, Queen Mary University of London

From the archive

Alan Turing smiling

Alan Turing was born in London on 23 June 1912. His parents were both from successful, well-to-do families, which in the early part of the 20th century in England meant that his childhood was pretty stuffy. He didn’t see his parents much, wasn’t encouraged to be creative, and certainly wasn’t encouraged in his interest in science. But even early in his life, science was what he loved to do. He kept up his interest while he was away at boarding school, even though his teachers thought it was beneath well-bred students. When he was 16 he met a boy called Christopher Morcom who was also very interested in science. Christopher became Alan’s best friend, and probably his first big crush. When Christopher died suddenly a couple of years later, Alan partly dealt with his grief through science, by studying whether the mind was made of matter, and where – if anywhere – the mind went when someone died.

The Turing machine

After he finished school, Alan went to the University of Cambridge to study mathematics, which brought him closer to questions about logic and calculation (and mind). After he graduated he stayed at Cambridge as a fellow, and started working on a problem that had been giving mathematicians headaches: whether it was possible to determine in advance if a particular mathematical proposition was provable. Alan solved it (the answer was no), but it was the way he solved it that helped change the world. He imagined a machine that could move symbols around on a paper tape to calculate answers. It would be like a mind, said Alan, only mechanical. You could give it a set of instructions to follow, the machine would move the symbols around and you would have your answer. This imaginary machine came to be called a Turing machine, and it forms the basis of how modern computers work.
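To make the idea concrete, here is a toy simulation of the kind of machine Alan imagined. The particular rules and tape are invented purely for illustration: this one just flips every 0 to a 1 and every 1 to a 0 on its tape, then stops when it reaches a blank square.

# A minimal sketch of a Turing machine: a tape of symbols, a read/write head,
# and a table of instructions telling it what to do. The rules below are an
# invented example, not anything Turing himself wrote down.

rules = {
    # (state, symbol read) -> (symbol to write, head movement, next state)
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", " "): (" ", 0, "halt"),   # blank square: nothing left to do
}

def run(tape, state="invert", position=0):
    tape = list(tape) + [" "]            # a blank at the end marks the finish
    while state != "halt":
        write, move, state = rules[(state, tape[position])]
        tape[position] = write           # write a symbol on the tape...
        position += move                 # ...and move the head along
    return "".join(tape).strip()

print(run("0110100"))                    # prints 1001011

Change the table of rules and the same machinery carries out a completely different calculation, which is exactly the point of Turing's idea.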

Code-breaking at Bletchley Park

By the time the Second World War came round, Alan was a successful mathematician who’d spent time working with the greatest minds in his field. The British government needed mathematicians to help them crack the German codes so they could read their secret communiqués. Alan had been helping them on and off already, but when war broke out he moved to the British code-breaking headquarters at Bletchley Park to work full-time. Based on work by Polish mathematicians, he helped crack one of the Germans’ most baffling codes, called the Enigma, by designing a machine (based on an earlier version by the Poles, again!) that could help break Enigma messages as long as you could guess a small bit of the text (see box). With the help of British intelligence that guesswork was possible, so Alan and his team began regularly deciphering messages from ships and U-boats. As the war went on the codes got harder, but Alan and his colleagues at Bletchley designed even more impressive machines. They brought in telephone engineers to help marry Alan’s ideas about logic and statistics with electronic circuitry. That combination was about to produce the modern world.

Building a brain

The problem was that the engineers and code-breakers were still having to make a new machine for every job they wanted it to do. But Alan still had his idea for the Turing machine, which could do any calculation as long as you gave it different instructions. By the end of the war Alan was ready to have a go at building a Turing machine in real life. If it all went to plan, it would be the first modern electronic computer, but Alan thought of it as “building a brain”. Others were interested in building a brain, though, and soon there were teams elsewhere in the UK and the USA in the race too. Eventually a group in Manchester made Alan’s ideas a reality.

Troubled times

Not long after, he went to work at Manchester himself. He started thinking about new and different questions, like whether machines could be intelligent, and how plants and animals get their shape. But before he had much of a chance to explore these interests, Alan was arrested. In the 1950s, gay sex was illegal in the UK, and the police had discovered Alan’s relationship with a man. Alan didn’t hide his sexuality from his friends, and at his trial Alan never denied that he had relationships with men. He simply said that he didn’t see what was wrong with it. He was convicted, and forced to take hormone injections for a year as a form of chemical castration.

Although he had had a very rough period in his life, he kept living as well as possible, becoming closer to his friends, going on holiday and continuing his work in biology and physics. Then, in June 1954, his cleaner found him dead in his bed, with a half-eaten, cyanide-laced apple beside him.

Alan’s suicide was a tragic, unjust end to a life that made so much of the future possible.


This blog is funded through EPSRC grant EP/W033615/1.

Cognitive crash dummies

by Paul Curzon, Queen Mary University of London

The world is heading for catastrophe. We’re hooked on power-hungry devices: our mobile phones and iPods, our PlayStations and laptops. Wherever you turn people are using gadgets, and those gadgets are guzzling energy – energy that we desperately need to save. We are all doomed, doomed… unless of course a hero rides in on a white charger to save us from ourselves.

Don’t worry, the cognitive crash dummies are coming!

Actually the saviours may be people like Bonnie John, a professor of human-computer interaction, and her then grad student, Annie Lu Luo: people who design cognitive crash dummies. When they were working at Carnegie Mellon University it was their job to figure out ways of deciding how well gadgets are designed.

If you’re designing a bridge you don’t want to have to build it before finding out if it stays up in an earthquake. If you’re designing a car, you don’t want to find out it isn’t safe by having people die in crashes. Engineers use models – sometimes physical ones, sometimes mathematical ones – that show in advance what will happen. How big an earthquake can the bridge cope with? The mathematical model tells you. How slow must the car go to avoid killing the baby in the back? A crash test dummy will show you.

Even when safety isn’t the issue, engineers want models that can predict how well their designs perform. So what about designers of computer gadgets? Do they have any models to do predictions with? As it happens, they do. Their models are called ‘human behavioural models’, but think of them as ‘cognitive crash dummies’. They are mathematical models of the way people behave, and the idea is you can use them to predict how easy computer interfaces are to use.

There are lots of different kinds of human behavioural model. One such ‘cognitive crash dummy’ is called ‘GOMS’. When designers want to predict which of a few suggested interfaces will be the quickest to use, they can use GOMS to do it.

Send in the GOMS

Suppose you are designing a new phone interface. There are loads of little decisions you’ll have to make that affect how easy the phone is to use. You can fit a certain number of buttons on the phone or touch screen, but what should you make the buttons do? How big should they be? Should you use gestures? You can use menus, but how many levels of menus should a user have to navigate before they actually get to the thing they are trying to do? More to the point, with the different variations you have thought up, how quickly will the person be able to do things like send a text message or reply to a missed call? These are questions GOMS answers.

To do a GOMS prediction you first think up a task you want to know about – sending a text message perhaps. You then write a list of all the steps that are needed to do it. Not just the button presses, but hand movements from one button to another, thinking time, time for the machine to react, and so on. In GOMS, your imaginary user already knows how to do the task, so you don’t have to worry about spending time fiddling around or making mistakes. That means that once you’ve listed all your separate actions GOMS can work out how long the task will take just by adding up the times for all the separate actions. Those basic times have been worked out from lots and lots of experiments on a wide range of devices. They have shown, on average, how long it takes to press a button and how long users are likely to think about it first.
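To get a feel for how such a prediction works, here is a minimal sketch (not real GOMS or CogTool output). The action names, the average times and the step list for replying to a missed call are all illustrative, but the calculation really is just adding up a time for each listed action.

# A toy keystroke-level GOMS-style prediction. The per-action times below are
# rough illustrative averages, not the values a real tool would use.

ACTION_TIMES = {            # seconds per action (assumed, approximate)
    "press_key": 0.28,      # press one button or key
    "point": 1.10,          # move a finger or pointer to a target
    "think": 1.35,          # mental preparation before acting
    "system_wait": 0.50,    # time for the device to respond
}

def predict_task_time(steps):
    """Add up the average time of every listed action."""
    return sum(ACTION_TIMES[step] for step in steps)

# Hypothetical step list for 'reply to a missed call' on one candidate design
reply_to_missed_call = ["think", "point", "press_key", "system_wait",
                        "think", "point", "press_key"]

print(f"Predicted time: {predict_task_time(reply_to_missed_call):.2f} s")

Compare the totals for two candidate designs and you have a prediction of which will be quicker to use, without building either.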

GOMS in 60 seconds?

GOMS has been around since the 1980s, but wasn’t being used much by industrial designers. The problem is that it is very frustrating and time-consuming to work out all those steps for all the different tasks for a new gadget. Bonnie John’s team developed a tool called CogTool to help. You make a mock-up of your phone design in it, and tell it which buttons to press to do each task. CogTool then works out where the other actions, like hand movements and thinking time, are needed and makes its predictions.

Bonnie John came up with an easier way to figure out how much human time and effort a new design uses, but what about the device itself? How about predicting which interface design uses less energy? That is where Annie Lu Luo came in. She had the great idea that you could take a GOMS list of actions and, instead of linking actions to times, work out how much energy the device uses for each action instead. By using GOMS together with a tool like CogTool, a designer can find out whether their design is the most energy efficient too.
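In the spirit of the timing sketch above, the energy version just swaps the table of times for a table of energy costs. All the figures below are made up purely for illustration.

# Same bookkeeping, but summing assumed energy costs (joules) per action
# instead of seconds. The numbers are invented for illustration only.

ACTION_ENERGY = {
    "press_key": 0.05,
    "point": 0.20,
    "think": 0.30,        # the screen stays on while the user thinks
    "system_wait": 0.40,  # radio and processor activity while responding
}

def predict_task_energy(steps):
    return sum(ACTION_ENERGY[step] for step in steps)

steps = ["think", "point", "press_key", "system_wait",
         "think", "point", "press_key"]
print(f"Predicted energy: {predict_task_energy(steps):.2f} J")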

So it turns out you don’t need a white knight to help your battery usage, just Annie Lu Luo and her version of GOMS. Mobile phone makers saw the benefit of course. That’s why Annie walked straight into a great job on finishing university.


This article was originally published on the CS4FN website and appears on pages 12 and 13 of issue 9 (‘Programmed to save the world’) of the CS4FN magazine, which you can download (free) here along with all of our other free material.

See also the concept of ‘digital twins’ in this article from our Christmas Advent Calendar: Pairs: mittens, gloves, pair programming, magic tricks.



This blog is funded through EPSRC grant EP/W033615/1.

Chatbot or Cheatbot?

by Paul Curzon, Queen Mary University of London

Speech bubbles
Image by Clker-Free-Vector-Images from Pixabay

The chatbots have suddenly got everyone talking, though about them as much as with them. Why? Because one of them, ChatGPT, has (amongst other things) reached the level of being able to fool us into thinking that it is a pretty good student.

It’s not exactly what Alan Turing was thinking about when he broached his idea of a test for intelligence for machines: if we cannot tell them apart from a human then we must accept they are intelligent. His test involved having a conversation with them over an extended period before making the decision, and that is subtly different to asking questions.

ChatGPT may be pretty close to passing an actual Turing Test but it probably still isn’t there yet. Ask the right questions and it behaves differently to a human. For example, ask it to prove that the square root of 2 is irrational and it can do it easily, and looks amazingly smart – there are lots of versions of the proof out there that it has absorbed. It isn’t actually good at maths though. Ask it simply to count or add things and it can get them wrong. Essentially, it is just good at determining the right information from the vast store of information it has been trained on and then presenting it in a human-like way. It is arguably the way it can present it “in its own words” that makes it seem especially impressive.

Will we accept that it is “intelligent”? Once it was said that if a machine could beat humans at chess it would be intelligent. When one beat the best human, we just said “it’s not really intelligent – it can only play chess”. Perhaps ChatGPT is just good at answering questions (amongst other things) but we won’t accept that as “intelligent”, even if it is how we judge humans. What it can do is impressive and a step forward, though. Also, it is worth noting other AIs are better at some of the things it is weak at – logical thinking, counting, doing arithmetic, and so on. It likely won’t be long before the different AIs’ mistakes and weaknesses are ironed out and we have ones that can do it all.

Rather than asking whether it is intelligent, what has got everyone talking, though (in universities and schools at least), is that ChatGPT has shown that it can answer all sorts of questions we traditionally use for tests well enough to pass exams. The issue is that students can now use it instead of their own brains. The cry is out that we must abandon setting humans essays, that we should no longer ask them to explain things, nor for that matter write (small) programs. These are all things ChatGPT can now do well enough to pass such tests for any student unable to do them themselves. Others say we should be preparing students for the future, so it’s ok: from now on, we just test what a human and ChatGPT can do together.

It certainly means assessment needs to be rethought to some extent, and of course this is just the start: the chatbots are only going to get better, so we had better do the thinking fast. The situation is very like the advent of calculators, though. Yes, we need everyone to learn to use calculators. But calculators didn’t mean we had to stop learning how to do maths ourselves. Essay writing, explaining, writing simple programs, analytical skills, etc, just like arithmetic, are all about core skill development, building the skills to then build on. The fact that a chatbot can do it too doesn’t mean we should stop learning and practicing those skills (and assessing them as an inducement to learn as well as a check on whether the learning has been successful). So the question should not be about what we should stop doing, but more about how we make sure students do carry on learning. A big, bad thing about cheating (aside from unfairness) is that the person who decides to cheat loses the opportunity to learn. Chatbots should not stop humans learning either.

The biggest gain we can give a student is to teach them how to learn, so now we have to work out how to make sure they continue to learn in this new world, rather than just hand over all their learning tasks to the chatbot to do. As many people have pointed out, there are not just bad ways to use a chatbot, there are also ways we can use chatbots as teaching tools. Used well by an autonomous learner they can act as a personal tutor, explaining things they realise they don’t understand immediately, so becoming a basis for that student doing very effective deliberate learning, fixing understanding before moving on.

Of course, there is a bigger problem: if a chatbot can do things at least as well as we can, then why would a company employ a person rather than just hire an AI? The AIs can now do a lot of jobs we assumed were ours to do. It could be yet another way of technology focussing vast wealth on the few and taking from the many. Unless our intent is a dystopian science fiction future where most humans have no role and no point (see, for example, E M Forster’s classic, The Machine Stops), then we still ought in any case to learn skills. If we are to keep ahead of the AIs and use them as a tool, not be replaced by them, we need the basic skills to build on to gain the more advanced ones needed for the future. Learning skills is also, of course, a powerful way for humans (if not yet chatbots) to gain self-fulfilment and so happiness.

Right now, an issue is that the current generation of chatbots are still very capable of being wrong. ChatGPT is like an overconfident student. It will answer anything you ask, but it gives wrong answers just as confidently as right ones. Tell it it is wrong and it will give you a new answer just as confidently and possibly just as wrong. If people are to use it in place of thinking for themselves then, in the short term at least, they still need the skill it doesn’t have of judging when it is right or wrong.

So what should we do about assessment? Formal exams come back to the fore so that conditions are controlled. They make it clear you have to be able to do it yourself. Open book online tests, which became popular in the pandemic, are unlikely to be fair assessments any more, but arguably they never were: chatbots or not, they were always too easy to cheat in. They may well still be good for learning. Perhaps in future, if the chatbots are so clever, we could turn the Turing test around: we just ask an artificial intelligence to decide whether particular humans (our students) are “intelligent” or not…

Alternatively, if we don’t like the solutions being suggested to the problems these new chatbots are raising, there is now another way forward. If they are so clever, we could just ask a chatbot to tell us what we should do about chatbots…



This blog is funded through EPSRC grant EP/W033615/1.

Daphne Oram: the dawn of music humans can’t play

by Paul Curzon, Queen Mary University of London

Music notes over paint brush patterns
Image by Gerd Altmann from Pixabay

What links James Bond, a classic 1950s radio comedy series and a machine for creating music by drawing? … Electronic music pioneer: Daphne Oram.

Oram was one of the earliest musicians to experiment with electronic music, and was the first woman to create an electronic instrument. She realised that the advent of electronic music meant composers no longer had to worry about whether anyone could actually physically perform the music they composed. If you could write it down in a machine-readable way then machines could play it electronically. That idea opened up whole new sounds and forms of music and is an idea that pop stars and music producers still make use of today.

She learnt to play music as a child and was good enough to be offered a place at the Royal College of Music, though turned it down. She also played with radio electronics with her brothers, creating radio gadgets and broadcasting music from one room to another. Combining music with electronics became her passion and she joined the BBC as a sound engineer. This was during World War 2 and her job included being the person ready during a live music broadcast to swap in a recording at just the right point if, for example, there was an air raid that meant the performance had to be abandoned. The show, after all, had to go on.

Composing electronic music

She went on to take this idea of combining an electronic recording with live performance further and composed a piece of music called Still Point that combined orchestral and electronic music in a completely novel way. The BBC turned down the idea of broadcasting it, however, so it was not played for some 70 years, until it was rediscovered after her death and ultimately performed at a BBC Prom.

Composers no longer had to worry about whether anyone could actually physically perform the music they composed

She started instead to compose electronic music and sounds for radio shows for the BBC, which is where the comedy series link came in. She created sound effects for a sketch for the Goon Show (the show which made the names of comics including Spike Milligan and Peter Sellers). She constantly played with new techniques. Years later it became standard for pop musicians to mess with tapes of music to get interesting effects, speeding them up and down, rerecording fragments, creating loops, running tapes backwards, and so on. These kinds of effects were part of the amazing sounds of the Beatles, for example. Oram was one of the first to experiment with these kinds of effects and use them in her compositions – long before pop star producers.

One of the most influential things she did was set up the BBC Radiophonic Workshop, which went on to revolutionise the way sound effects and scores for films and shows were created. Oram, though, left the BBC shortly after it was founded, leaving the way open for other BBC pioneers like Delia Derbyshire. Oram felt she wasn’t getting credit for her work, and couldn’t push forward with some of her ideas. Instead Oram set herself up as an independent composer, creating effects for films and theatre. One of her contracts involved creating electronic music that was used on the soundtracks of the early Bond films starring Sean Connery – so Shirley Bassey is not the only woman to contribute to the Bond sound!

The Music Machine

While her film work brought in the money, she continued with her real passion, which was to create a completely new and highly versatile way to create music…by drawing. She built a machine – the Oramics Machine – that read a composition drawn onto film reels. It fulfilled her idea of having a machine that could play anything she could compose (and answered a question she had as a child, when she wondered how you could play the notes that fell between the keys on a piano!).

Image by unknown photographer from wikimedia.

The 35mm film that was the basis of her system dates all the way back to the 19th century, when George Eastman, Thomas Edison and William Kennedy Dickson pioneered the invention of film-based photography and then movies. It involved a light-sensitive layer being painted on strips of film, with holes down the side that allowed the film to be advanced. This gave Oram a recording medium. She could etch or paint subtle shapes and patterns on to the film. In a movie projector, light is shone through the film, projecting the pictures on the film on to the screen. Oram instead used light sensors to detect the patterns on the film and convert them to electronic signals. Electronic circuitry she designed (and was awarded patents for) controlled cathode ray tubes that showed the original drawn patterns, but now as electrical signals. Ultimately these electrical signals drove speakers. Key to the flexibility of the system was that different aspects of the music were controlled by patterns on different films. One, for example, controlled the frequency of the sound, others the timbre or tone quality, and others the volume. These different control signals for the music were then combined by Oram’s circuitry. The result of combining the fine control of the drawings with the multiple films meant she had created a music machine far more flexible in the sound it could produce than any traditional instrument or orchestra. Modern music production facilities use very similar approaches today, though based on software systems rather than the 1960s technology available to Oram.
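You can get a feel for the idea with a short software sketch (a loose analogy only, not a model of Oram’s actual circuitry). Pretend one hand-drawn curve controls the pitch and another controls the volume, then combine them into a single sound; the curves below are invented and the numpy library is assumed.

# A loose software analogy of Oramics: separate drawn 'control curves' for
# pitch and volume are combined into one sound. Not a model of Oram's hardware.
import numpy as np
import wave

SAMPLE_RATE = 44100
seconds = 3.0
t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)

# Pretend these were traced from two strips of film: a pitch curve gliding
# from A3 up to A4, and a volume curve that swells and then fades.
pitch_curve = np.linspace(220.0, 440.0, t.size)     # frequency in Hz
volume_curve = np.sin(np.pi * t / seconds)          # 0 -> 1 -> 0

# Integrate the time-varying frequency to get the phase of the waveform,
# then shape it with the volume envelope.
phase = 2 * np.pi * np.cumsum(pitch_curve) / SAMPLE_RATE
samples = volume_curve * np.sin(phase)

with wave.open("oramics_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                               # 16-bit audio
    f.setframerate(SAMPLE_RATE)
    f.writeframes((samples * 32767).astype(np.int16).tobytes())

Change the shapes of the two curves and you change the sound, which is exactly the freedom Oram’s drawn films gave her, only realised here in software.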

Ultimately, Daphne Oram was ahead of her time as a result of combining her two childhood fascinations of music and electronics in a way that had not been done before. She may not be as famous as the great record producers who followed her, but they owe a lot to her ideas and innovation.



EPSRC supports this blog through research grant EP/W033615/1. 

Kimberly Bryant, founder of Black Girls Code, born 14 January 1967

Kimberly Bryant from Black Girls Code, pictured at the SXSW conference in 2016 (photo: https://en.wikipedia.org/wiki/Kimberly_Bryant_(technologist))

Kimberly Bryant was born on 14 January 1967 in Memphis, Tennessee and was enthusiastic about maths and science in school, describing herself as a ‘nerdy girl’. She was awarded a scholarship to study Engineering at university but while there she switched to Electrical Engineering with Computer Science and Maths. During her career she has worked in several industries including pharmaceutical, biotechnology and energy.

She is best known, though, for founding Black Girls Code. In 2011 her daughter wanted to learn computer programming but nearly all the students on the nearest courses were boys and there were hardly any African American students enrolled. Kimberly didn’t want her daughter to feel isolated (as she herself had felt) so she created Black Girls Code (BGC) to provide after-school and summer school coding lessons for African American girls. BGC has a goal of teaching one million Black girls to code by 2040 and every year thousands of girls learn coding with their peers.

She has received recognition for her work and was given the Jefferson Award for Community Service for the support she offered to girls in her local community, and in 2013 Business Insider included her on its list of The 25 Most Influential African-Americans in Technology. When Barack Obama was the US President the White House website honoured her as one of its eleven Champions of Change in Tech Inclusion – Americans who are “doing extraordinary things to expand technology opportunities for young learners – especially minorities, women and girls, and others from communities historically underserved or underrepresented in tech fields.”



This blog is funded through EPSRC grant EP/W033615/1.

Bringing people closer when they’re far away

This article was written a few years ago, before the Covid pandemic led to many more of us keeping in touch from a distance…

by Paul Curzon, Queen Mary University of London

Photo shows two children playing with a tin-can telephone, which lets them talk to each other at a distance. Picture credit Jerry Loick KONZI and Wikipedia. Original photograph can be found here.

Living far away from the person you love is tough. You spend every day missing their presence. The Internet can help, and many couples in long-distance relationships use video chat to see more of each other. It’s not the same as being right there with someone else, but couples find ways to get as much connection as they can out of their video chats. Some researchers in Canada, at the University of Calgary and Simon Fraser University, interviewed couples in long-distance relationships to find out how they use video chat to stay connected.

Nice to see you

The first thing that the researchers found is perhaps what you might expect. Couples use video chat when it’s important to see each other. You can text little messages like ‘I love you’ to each other, or send longer stories in an email, and that’s fine. But seeing someone’s face when they’re talking to you feels much more emotionally close. One member of a couple said, “The voice is not enough. The relationship is so physical and visual. It’s not just about hearing and talking.” Others reported that seeing each other’s face helped them know what the other person was feeling. For one person, just seeing his partner’s face when she was feeling worn out helped him understand her state of mind. In other relationships, seeing one another helped avoid misunderstandings that come from trying to interpret tone of voice. Plus, having video helped couples show off new haircuts or clothes, or give each other tours of their surroundings.

Hanging out on video

The couples in the study didn’t use video chat just to have conversations. They also used it in a more casual way: to hang out with each other while they went about their lives. Their video connections might stay open for hours at a time while they did chores, worked, read, ate or played games. Long silences might pass. Couples might not even be visible to each other all the time. But each partner would, every once in a while, check back at the video screen to see what the other was up to. This kind of hanging out helped couples feel the presence of the other person, even if they weren’t having a conversation. One participant said of her partner, “At home, a lot of times at night, he likes to put on his PJs and turn out all the lights and sit there with a snack and, you know, watch TV… As long as you can see the form of somebody that’s a nice thing. I think it’s just the comfort of knowing that they’re there.”

Some couples felt connected by doing the same things together in different places. They shared evenings together in living rooms far away from each other, watching the same thing on television or even getting the same movie to watch and starting it at the same time. Some couples had dinner dates where they ordered the same kind of takeaway and ate it with each other through their video connection.

Designing to connect

This might not sound like research about human-computer interaction. It’s about the deepest kind of human interaction. But good computer design can help couples feel as connected as possible. The researchers also wanted to find out how they could help couples make their video chats better. Designers of the future might think about how to make gadgets that make video chat easier to do while getting on with other chores. It’s difficult to talk, film yourself, cook and move through the house all at the same time. What’s more, today’s gadgets aren’t really built to go everywhere in the house. Putting a laptop in a kitchen or propping one up in a bed doesn’t always work so well. The designers of operating systems need to work out how to do other stuff at the same time as video. If couples want to have a video chat connection open for hours, sometimes they might need to browse the web or write a text message at the same time. And what about couples who like to fall asleep next to one another? They might need night-vision cameras so they can see their partner without disturbing their sleep.

We’re probably going to have more long-distance relationships in the future. Easy, cheap travel makes it easier to move to faraway places. You can go to university abroad, and join a company with offices on every continent. It’s an awfully good thing that technology is making it easier to stay connected with the people who are important to you. Video chat is not nearly as good as feeling your lover’s touch, but when you really miss someone, even watching them do chores helps.


This article was originally published on CS4FN and can also be found on pages 4 and 5 of CS4FN Issue 15, Does your computer understand you?, which you can download as a PDF. All of our free material can be downloaded here: https://cs4fndownloads.wordpress.com/



This blog is funded through EPSRC grant EP/W033615/1.

Hedy Lamarr: The movie star, the piano player and the torpedo

by Peter W McOwan, Queen Mary University of London

(from the archive)

Hedy Lamarr
eBay, Public domain, via Wikimedia Commons

Hedy Lamarr was a movie star. Back in the 1940s, in Hollywood’s Golden Age, she was considered one of the screen’s most beautiful women and appeared in several blockbusters. But Hedy was more than just good looks and acting skills. Even though many people remembered Hedy for her pithy quote “Any girl can be glamorous. All she has to do is stand still and look stupid”, at the outbreak of World War 2 she and composer George Antheil invented an encryption technique for a torpedo radio guidance system!

Their creative idea for an encryption system was based on the mechanism behind the ‘player piano’ – an automatic piano where the tune is controlled by a roll of paper with punched holes. The idea was to use what is now known as ‘frequency hopping’ to overcome the possibility of the control signal being jammed by the enemy. Normal radio communication involves the sender picking a radio frequency and then sending all communication at that frequency. Anyone who tunes in to that frequency can then listen in, but can also jam it by sending their own more powerful signal at the same frequency. That’s why non-digital radio stations constantly tell you their frequency: “96.2 FM” or whatever. Frequency hopping involves jumping from frequency to frequency throughout the broadcast. Only if sender and receiver share the secret of exactly when the jumps will be made, and to what frequencies, can the receiver pick up the broadcast – and without that secret an enemy cannot follow the signal to listen in or jam it. That is essentially what the piano roll could do. It stored the secret.
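In modern software terms the trick looks roughly like the sketch below. The channel list, the ‘piano roll’ secret and the message are all invented for illustration; the point is simply that both ends derive the same hop sequence from a shared secret, so only they know which frequency the signal will be on from moment to moment.

# A toy illustration of frequency hopping: the 'piano roll' is a shared
# secret that both sides use to generate the same sequence of channels.
# All frequencies and values below are made up for illustration.
import random

CHANNELS_MHZ = [88.1, 91.5, 96.2, 99.7, 103.4, 107.9]

def hop_sequence(shared_secret, hops):
    """Sender and receiver derive the same hop sequence from the secret."""
    rng = random.Random(shared_secret)
    return [rng.choice(CHANNELS_MHZ) for _ in range(hops)]

message = ["attack", "at", "dawn"]
sender_hops = hop_sequence("piano-roll-1942", len(message))
receiver_hops = hop_sequence("piano-roll-1942", len(message))

for word, tx, rx in zip(message, sender_hops, receiver_hops):
    received = word if tx == rx else None  # only the right channel is heard
    print(f"sent '{word}' on {tx} MHz -> receiver tuned to {rx} MHz: {received}")

An eavesdropper without the secret would have to guess a new frequency for every hop, which is why the signal is so hard to intercept or jam.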

Though the navy didn’t actually use the method during World War II, they did use the principles during the Cuban missile crisis in the 1960s. The idea behind the method is also used in today’s GPS, Wi-Fi, Bluetooth and mobile phone technologies, underpinning so much of the technology of today. In 2014 she was inducted into the US National Inventors Hall of Fame.


Click the player below or download the audio file (.m4a) to listen to this article read by Jo Brodie.




EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Gary Starkweather (b 9 Jan 1938) invented the laser printer and colour management

Gary Starkweather (9 January 1938 – 26 December 2019) invented and developed the first laser printer. In the late 1960s he was an engineer, with a background in optics, working in the US for the Xerox company (famous for their photocopiers). He came up with the idea of using a laser beam to draw the image directly onto a photocopier’s drum (so that it could then print lots of copies), speeding up the process of printing documents.

Printer image by David Dunmore from Pixabay

You can hear what a modern laser printer sounds like by clicking on the link below…

…and there’s a video of him talking about the ‘Eureka moment’ of his invention here.

Laser printers are found in offices worldwide – you may even have one at home.

Colour wheel image by Pete Linforth from Pixabay

He also invented colour management, which is a way of ensuring that a shade of blue on your computer’s or phone’s screen looks the same on a TV screen or when printed out. Different devices have different display colours so ‘red’ on one device might not be the same as ‘red’ on another. Colour management is something that happens in devices behind the scenes: it translates the colour instruction from one device to produce the closest match on another. There is an International Color Consortium (ICC) which helps different device manufacturers ensure that colour is “seamless between devices and documents”.
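The core idea can be sketched in a few lines. A colour-managed system converts a device’s own colour numbers into a device-independent description, then converts from that into the second device’s numbers using its measured profile. The sketch below uses the standard sRGB-to-CIE-XYZ conversion for the first step; it is only an illustration of the principle, not how ICC profiles are actually implemented.

# A minimal sketch of device-independent colour conversion, the idea behind
# colour management. Real systems use measured ICC profiles for each device.

def srgb_to_linear(c):
    """Undo the sRGB gamma curve for one channel value in 0..255."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(rgb):
    """Device colour -> device-independent CIE XYZ (the 'hub' colour space)."""
    r, g, b = (srgb_to_linear(v) for v in rgb)
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

# 'Red' on an sRGB screen, described independently of any particular device:
print(srgb_to_xyz((255, 0, 0)))
# A second device would then map this XYZ value to the closest colour its own
# screen or printer can produce, using that device's own profile.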

Starkweather also received an Academy Award (also known as an Oscar) for Technical Achievement in 1994, for the work he’d done in colour film scanning. That involves taking a strip of film and converting it digitally so it can be edited on a computer.

Also on this day, in 2007, the first Apple iPhone was announced (though not available until June that year)… and all iPhones use colour management!

Lynn Conway: revolutionising chip design

by Paul Curzon, Queen Mary University of London

Colourful line and dot abstract version of electronics
Image by Markus Christ from Pixabay

University of Michigan professor and transgender activist Lynn Conway, along with Carver Mead, completely changed the way we think about, do and teach VLSI (Very Large Scale Integration) chip design. Their revolutionary book on VLSI design quickly became the standard book used to teach the subject round the world. It wasn’t just a book though, it was a whole new way of doing electronics. Their ideas formed the foundation of the way the electronics industry subsequently worked and still does today. Calling her impact totally transformational is not an exaggeration. Prior to this, she had worked for IBM, part of a team making major advances in microprocessor design. She was, however, sacked by IBM for being transgender when she decided to transition. Times and views have fortunately been transformed too, and IBM subsequently apologised for its blatant discrimination!

A core part of the electronics revolution Mead and Conway triggered was to start thinking of electronics design as more like software. They advocated using special software design packages and languages that allowed hardware designers to put together a circuit design essentially by programming it. Once a design was completed, tools in the package could simulate the behaviour of the circuit, allowing it to be thoroughly tested before the circuit was physically built. The result was that designs were less likely to fail and creating them was much quicker. Even better, once tested, the design could then be compiled directly to silicon: the programmed version could be used to automatically create the precise layout and wiring of components below the transistor level to be laid on to the chip for fabrication.

This software approach allowed levels of abstraction to be used much more easily in electronics design: bigger components being created from smaller ones, in turn built from smaller ones still. Once designed, the detailed implementation of those smaller components could be ignored in the design of larger components. A key part of this was Conway’s idea of scalable design rules to follow as the designs grew. Designers could focus on higher-level design, building on previous designs, with the details of creating the physical chips automated from the high-level designs.
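You can get the flavour of ‘hardware design as software’ from this toy sketch (real chip design uses hardware description languages and tools of the kind Mead and Conway described, not Python): small components are described once, tested by simulation, then composed into bigger ones whose internal details can be forgotten.

# A toy flavour of the Mead-Conway approach: describe small components as
# code, simulate them to test the design, then build bigger components from
# them. Real chip design uses hardware description languages, not Python.

def half_adder(a, b):
    """Adds two bits: returns (sum, carry)."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Built from two half adders plus an OR gate: abstraction in action."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# 'Simulate before fabrication': exhaustively test the design in software.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
print("full adder passes all 8 test cases")

Once the full adder is trusted, a designer can wire many of them together to add whole numbers without ever thinking about the gates inside again.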

Lynn Conway. Photo from Wikimedia by Charles Rogers, CC BY-SA 2.5

This transformation is similar (though probably even more transformational) to the switch from programming in low-level languages to writing programs in high-level languages and allowing a compiler to create the actual low-level code that is run. Just as that allowed vastly larger programs to be written, the use of electronic design automation software and languages allowed massively larger circuits to be created.

Conway’s ideas also led to MOSIS: an Internet-based service whereby different designs by different customers could be combined onto one wafer for production. This meant that the fabrication costs of prototyping were no longer prohibitive. Suddenly, creating designs was cheap and easy, a boon for both university and industrial research as well as for VLSI education. Conway, for example, pioneered the idea of allowing her students to create their own VLSI designs as part of her university course, with their designs all being fabricated together and the resulting chips quickly returned. Large numbers could now learn VLSI design in a practical way, gaining hands-on experience while still at university. This improvement in education, together with the ease with which small companies could suddenly prototype new ideas, made possible the subsequent boom in hi-tech start-up companies at the end of the 20th century.

Before Mead and Conway chip design was done slowly by hand by a small elite and needed big industry support. Afterwards it could be done quickly and easily by just about anyone, anywhere.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Sorry to bug you: Grace Hopper

by Peter W McOwan and Paul Curzon, Queen Mary University of London

(from the archive)

Close up head on of a flying butterfly or moth

In the 2003 film The Matrix Reloaded, Neo, Morpheus, Trinity and crew continue their battle with the machines that have enslaved the human race in the virtual reality of the Matrix. To find the Oracle, who can explain what’s going on (which, given the twisty plot in the Matrix films, is always a good idea), Trinity needs to break into a power station and switch off some power nodes so the others can enter the secret floor. The computer terminal displays that she is disabling 27 power nodes, numbers 21 to 48. Unfortunately, that’s actually 28 nodes, not 27! A computer that can’t count and shows the wrong message!

Sadly, there are far too many programs with mistakes in them. These mistakes are known as bugs because back in 1945 Grace Hopper, one of the female pioneers of computer science, found an error caused by a moth trapped between the points at Relay 70, Panel F, of the Mark II Aiken Relay Calculator being tested at Harvard University. She removed the moth, and attached it to her test logbook, writing ‘First actual case of bug being found’, and so popularised the term ‘debugging’ for testing and fixing a computer program.

Grace Hopper is famous for more than just the word ‘bug’ though. She was one of the most influential of the early computer pioneers, responsible for perhaps the most significant idea in helping programmers to write large, bug-free programs.

As a Lieutenant in the US Navy reserves, having volunteered after Pearl Harbor, Grace was one of the first three programmers of Harvard’s IBM Mark I computer. It was the first fully automatic programmed computer.

She didn’t just program those early computers though, she came up with innovations in the way computers were programmed. The programs for those early computers all had to be made up of so-called ‘machine instructions’. These are the simplest operations the computer can do: such as to add two numbers, move data from a place in memory to a register (a place where arithmetic can be done in a subsequent operation), jump to a different instruction in the program, and so on.

Programming in such basic instructions is a bit like giving someone directions to the station but having to tell them exactly where to put their foot for every step. Grace’s idea was that you could write programs in a language closer to human language where each instruction in this high-level language stood for lots of the machine instructions – equivalent to giving the major turns in those directions rather than every step.

The ultimate result was COBOL: one of the first widely used high-level programming languages. At a stroke her ideas made programming much easier to do and much less error-prone. Big programs were now a possibility.

For this idea of high-level languages to work though you needed a way to convert a program written in a high-level language like COBOL into those machine instructions that a computer can actually do. It can’t fill in the gaps on its own! Grace had the answer – the ‘compiler’. It is just another computer program, but one that does a specialist task: the conversion. Grace wrote the first ever compiler, for a language called A-0, as well as the first COBOL compiler. The business computing revolution was up and running.
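To see the kind of job a compiler does, here is a toy one (nothing like A-0 or a real COBOL compiler, just the general idea): it turns a single high-level statement into a list of simple machine-style instructions a processor could follow one by one. The statement, register names and instruction set are all invented for illustration.

# A toy compiler: translate one tiny high-level statement into pretend
# machine-style instructions. The instruction set here is made up.

def compile_addition(statement):
    """Compile e.g. 'total = price + tax' into simple instructions."""
    target, expression = (part.strip() for part in statement.split("="))
    left, right = (part.strip() for part in expression.split("+"))
    return [
        f"LOAD  R1, {left}",    # move data from memory into a register
        f"LOAD  R2, {right}",
        "ADD   R1, R2",         # do the arithmetic in the registers
        f"STORE R1, {target}",  # move the result back to memory
    ]

for instruction in compile_addition("total = price + tax"):
    print(instruction)

One friendly line of high-level code becomes four fiddly low-level steps, and a real compiler does this for millions of lines at a time.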

High-level languages like COBOL have allowed far larger programs to be written than is possible in machine-code, and so ultimately the expansion of computers into every part of our lives. Of course even high-level programs can still contain mistakes, so programmers still need to spend most of their time testing and debugging. As the Oracle would no doubt say, “Check for moths, Trinity, check for moths”.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin.