Designing for autistic people

by Daniel Gill and Paul Curzon, Queen Mary University of London

What should you be thinking about when designing for a specific group with specific needs, such as autistic people? Queen Mary students were set this task and on the whole did well. The lessons though are useful when designing any technology, whether apps or gadgets.

A futuristic but complicated interface with lots of features: feature bloat?
Image by Tung Lam from Pixabay

The Interactive Systems Design module at QMUL includes a term-long realistic team interaction design project with the teaching team acting as clients. The topic changes each year but is always open-ended and aimed at helping some specific group of people. The idea is to give experience designing for a clear user group not just for anyone. A key requirement is always that the design, above all, must be very easy to use, without help. It should be intuitively obvious how to use it. At the end of the module, each team pitches their design in a short presentation as well as a client report.

This year the aim was to create something to support autistic people. What their design does, and how, was left to the teams to decide from their early research and prototyping. They had to identify a need themselves. As a consequence, the teams came up with a wide range of applications and tools to support autistic people in very different ways.

How do you come up with an idea for a design? It should be based on research. The teams had to follow a specific (if simplified) process. The first step was to find out as much as they could about the user group and other stakeholders being designed for: here autistic people and, if appropriate, their carers. The key thing is to identify their unmet goals and needs. There are lots of ways to do this: from book research (charities, for example, often provide good background information) and informally talking to people from the stakeholder group, to more rigorous methods of formal interviews, focus groups and even ethnography (where you embed yourself in a community).

Many of the QMUL teams came up with designs that clearly supported autistic people, but some projects were only quite loosely linked with autism. While the needs of autistic people were considered in the concept and design, they did not fully focus on supporting autistic people. More feedback directly from autistic people, both at the start and throughout the process, would likely have made the applications much more suitable. (That of course is quite hard in this kind of student role-playing scenario, though some groups were able to do so.) That though is a key idea the module aims to teach – how important it is to involve users and their concerns closely throughout the design process, both in coming up with designs and in evaluating them. Old-fashioned waterfall models from software engineering, where designs are only tested with users at the end, are just not good enough.

From the research, the teams were then required to create design personas. These are detailed, realistic but fictional people with names, families, and lives. The more realistic the character the better (computer scientists need to be good at fiction too!). Personas are intended to represent the people being designed for in a concrete and tangible way throughout the design process. They help to ensure the designers design for real people, not some abstract, intangible person who shape-shifts to fit the needs of their ideas. Doing the latter can lead to concepts being pushed forward just because the designer is excited by their ideas rather than because they are actually useful. Throughout the design the teams refer back to their personas – does this idea work for Mo and the things he is trying to do?

An important part of good persona design lies in how you handle stereotypes. The QMUL groups avoided stereotypes of autistic people. One group went further, though: they included the positive traits that their autistic persona had, not just negative ones. They didn’t see their users in a simplistic way. Thinking about positive attributes is really, really important when designing for neurodivergent people, and for those with physical disabilities too, as it helps make the persona a realistic person. That group’s persona was therefore outstanding. Alan Cooper, who came up with the idea of design personas, argued that stereotypes (such as a nurse persona being female) were good in that they could give people a quick and solid idea of the person. However, this is a very debatable view. It seems to go against the whole idea of personas. Most likely you miss the richness of real people and end up designing for a fictional person that doesn’t represent that group of people at all. The aim of personas is to help the designers see the world from the perspective of their users, so here of autistic people. A stereotype can only diminish that.

Multicolour jigsaw ribbon
Image by Oberholster Venita from Pixabay

Another core lesson of the module is the importance of avoiding feature bloat. Lots of software and gadgets are far harder to use than they need be because they are packed with features: features that are hardly ever, possibly never, used. What could have been simple-to-use apps, focusing on a few key tasks, are instead turned into ‘do everything’ apps. A really good video call app becomes a file store, a messaging place, chat rooms, a phone booth, a calendar, a movie player, and more. Suddenly it’s much harder to make video calls. Because there are so many features and so many modes, all needing their own controls, the important things the design was supposed to help you do become hard to do (think of a TV remote control – the more features, the more buttons, until the important ones are lost). That undermines the aim that good design should make key tasks intuitively easy. The difficulty when designing such systems is balancing the desire to put as many helpful features as possible into a single application against the complexity that this adds. That can be bad for neurotypical people, who may find it hard to use. For neurodivergent people it can be much worse – they can find themselves overwhelmed. When presented with such a system, if they can use it at all, they might have to develop their own strategies to overcome the information overload caused. For example, they might need to learn the interface bit-by-bit. For something being designed specifically for neurodiverse people, that should never happen. Some of the applications of the QMUL teams were too complicated like this. This seems to be one of the hardest things for designers to learn, as adding ideas and features feels like a good thing. It is, though, vitally important not to make this mistake when designing for autistic people.

Perhaps one of the most important points that arose from the designs was that many of the applications presented were designed to help autistic people change to fit into the world. While this would certainly be beneficial, it is important to realise that such systems are only necessary because the world is generally not welcoming for autistic people. It is much better if technology is designed to change the world instead. 


Neurodiversity and what it takes to be a good programmer

by Paul Curzon, Queen Mary University of London

People often suggest neurodiverse people make good computer scientists. For example, one of the most famous autistic people, Temple Grandin, an academic at Colorado State University and animal welfare expert, has suggested programming is one of the jobs autistic people are potentially naturally good at (along with other computer science linked jobs) and that “Half the people in Silicon Valley probably have autism.” So what makes a good computer scientist? And why might people suggest neurodiverse people are good at it?

A multicoloured jigsaw pattern ribbon
Image by Oberholster Venita from Pixabay

What makes a good programmer? Is it knowledge, skills or is it the type of person you are? It is actually all three though it’s important to realise that all three can be improved. No one is born a computer scientist. You may not have the knowledge, the skills or be the right kind of person now, but you can improve them all.

To be a good programmer, you need to develop specialist knowledge such as knowing what the available language constructs are, knowing what those constructs do, knowing when to use them over others, and so on. You also need to develop particular technical skills like an ability to decompose problems into sub-problems, to formulate solutions in a particular restricted notation (the programming language), to generalise solutions, and so on. However, you also need to become the right kind of person to do well as a programmer. 

Thinking about what kind of person makes a good programmer, to help my students work on those attributes and so become better programmers, I made a list of the attributes I associate with good student programmers. My list includes: attention to detail, an ability to think clearly and logically, being creative, having good spatial visualising skills, being a hard worker, being resilient when things go wrong and so determined, being organised, being able to meet deadlines, enjoying problem solving, being good at pattern matching, thinking analytically and being open to learning new and different ways of doing things.

More recently, when taking part in a workshop about neurodiversity I was struck by a similar list we were given. Part of the idea behind ‘neurodiversity’ is that everyone is different and everyone has strengths and weaknesses. If you think of ‘disability’ you tend to think of apparent weaknesses. Those ‘weaknesses’ are often there because the world we have created has turned them into weaknesses. For example, being in a wheelchair makes it hard to travel because we have built a world full of steps, kerbs and cobbles, doors that are hard to manipulate, high counters and so on. If we were to remove all those obstacles, a wheelchair would not have to reduce your ability to get around. Thinking about neurodiversity, the suggestion is to think about the strengths that come with it too, not just the difficulties you might encounter because of the way we’ve made the world.

The list of strengths of neurodiverse people given at the workshop was: attention to detail, focussed interest, problem-solving, creativity, visualising, pattern recognition. Looking further you find both those positives reinforced and new positives added. For example, one support website gives the positives of being an autistic person as: attention to detail, deep focus, observation skills, ability to absorb and retain facts, visual skills, expertise, a methodological approach, taking novel approaches, creativity, tenacity and resilience, accepting of difference and integrity. Thinking logically is also often picked out as a trait that neurodiverse people are often good at. The similarity of these lists to my list of what kind of person my students should aim to turn themselves into is very clear. Autistic people can start with a very solid basis to build on. If my list is right, then their personal positives may help neurodiverse people to quickly become good programmers.

Here are a few of those positives others have picked out that neurodiverse people may have and how they relate to programming:

Attention to detail: This is important in programming because a program is all about detail, both in the syntax (not missing brackets or semicolons) but more importantly not missing situations that could occur so cases the program must cover. A program must deal with every possibility that might arise, not just some. The way it deals with them also matters in the detail. Poor programs might just announce a mistake was made and shut down. A good program will explain the mistake and give the user a way to correct it for example. Detail like that matters. Attention to detail is also important in debugging as bugs are just details gone wrong. 
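To make that concrete, here is a minimal sketch (our own toy example, not from any real system) of the difference: instead of just announcing a mistake and giving up, the program below explains what went wrong and lets the user try again.

```python
# A toy sketch of handling the detail of user mistakes: explain the problem
# and offer a way to correct it, rather than just stopping.
def ask_for_age():
    """Keep asking until the user types a whole number."""
    while True:
        reply = input("How old are you? ")
        if reply.strip().isdigit():
            return int(reply)
        # The detail that matters: say what was wrong and how to fix it.
        print(f"'{reply}' is not a whole number. Please type digits only, for example 12.")

# age = ask_for_age()  # uncomment to try it interactively
```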

Resilience and determination: Programming is like being on an emotional roller coaster. Getting a program right is full of highs and lows. You think it is working and then the last test you run shows a deep flaw. Back to the drawing board. As a novice learning it is even worse. The learning curve is steep as programming is a complex skill. That means there are lots of lows and seemingly insurmountable highs. At the start it can seem impossible to get a program to even compile never mind run. You have to keep going. You have to be determined. You have to be resilient to take all the knocks.

Focussed interest: Writing a program takes time and you have to focus. Stop and come back later and it will be so much harder to continue and to avoid making mistakes. Decomposition is a way to break the overall task into smaller subtasks, so methods to code, and that helps, once you have the skill. However, even then being able to maintain your focus to finish each method, so each subtask, makes the programming job much easier.

Pattern recognition: Human expertise in anything ultimately comes down to pattern matching new situations against old. It is the way our brains work. Expert chess players pattern match situations to tell them what to do, and so do firefighters in a burning building. So do expert programmers. Initially programming is about learning the meaning of programming constructs and how to use them, problem solving every step of the way. That is why the learning curve is so steep. As you gain experience though it becomes more about pattern matching: realising what a particular program needs at this point and how it is similar to something you have seen before, then matching it to one of many standard template solutions. Then you just whip out the template and adapt it to fit. Spot that something is essentially a search task and you whip out a search algorithm to solve it. Need to process a 2-dimensional array – you just need the rectangular for loop solution. Once you can do that kind of pattern matching, programming becomes much, much simpler.
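As an illustration, here is a hedged sketch of that ‘rectangular for loop’ template: the same nested-loop pattern works for any 2-dimensional grid, whatever you need to do to each cell.

```python
# The nested-loop template for visiting every cell of a 2-dimensional array.
grid = [
    [1, 2, 3],
    [4, 5, 6],
]

total = 0
for row in range(len(grid)):           # visit every row...
    for col in range(len(grid[row])):  # ...and every column in that row
        total += grid[row][col]

print(total)  # 21: the template visits each cell exactly once
```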

Creativity and doing things in novel ways: Writing a program is an act of creation, so just like arts and crafts it involves creativity. Just writing a program is one kind of creativity; coming up with an idea for a program, or spotting a need no one else has noticed so you can write a program that fills it, requires great creativity of a slightly different kind. So does coming up with a novel solution once you have a novel problem. Developing new algorithms is all about thinking up a novel way of solving a problem and that of course takes creativity. Designing interfaces that are aesthetically pleasing but make a task easier to do takes creativity. If you can think about a problem in a different way to everyone else, then likely you will come up with different solutions no one else thought of.

Problem solving and analytical minds: Programming is problem solving on steroids. Being able to think analytically is an important part of problem solving and is especially powerful if combined with creativity (see above). You need to be able to analyse a problem, come up with creative solutions and be able to analyse what is the best way of solving it from those creative solutions. Being analytical helps with solid testing too.

Visual thinking: Research suggests those with good visual, spatial thinking skills make good programmers. The reasons are not clear, but good programs are all about clear structure, so it may be that the ability to easily see the structure of programs and manipulate them in your head is part of it. That is part of the idea of block-based programming languages like Scratch and why they are used as a way into programming for young children. The structure of the program is made visual. Some paradigms of programming are also naturally more visual. In particular, object-oriented programming sees programs as objects that send messages to each other and that is something that can naturally be visualised. As programs become bigger, that ability to still visualise the separate parts and how they work as a whole is a big advantage.

A methodological approach: Novice programmers just tinker and hack programs together. Expert programmers design them. Many people never seem to get beyond the hacking stage, struggling with the idea of following a method to design first, yet it is vital if you are to engineer serious programs that work. That doesn’t mean that programming is just following methods; tinkering can be part of the problem solving and coming up with creative ideas, but it should be used within a rigorous methodology, not instead of it. Good programming teams also spend more time testing programs than writing them, and that takes rigorous methods to do well too. Software engineering is all about following rigorous methods precisely because it is the only way to develop real programs that work and are fit for purpose. Vast amounts of software written is never used because it is useless. Rigorous methods give a way to avoid many of the problems.

Logical thinking: Being able to think clearly and logically is core to programming. It combines some of the things above, like attention to detail and thinking clearly and methodologically. Writing programs is essentially the practice of applied logic, as logic underpins the semantics (ie meaning) of programming languages and so programs. You have to think logically when writing programs, when testing them and when debugging them. Computers are machines built from logic and you need to think in part like a computer to get it to do what you want. A key part of good programming is doing code walkthroughs – as a team stepping through what a program does line by line. That requires clear logical thinking to follow the steps the computer performs.

I could go on, but will leave it there. The positives that neurodiverse people might have are very strongly positives for becoming a good programmer. That is why some of the best students I’ve had the privilege to teach have been neurodiverse students.

Different people, neurodiverse or otherwise, will start with different positives and different weaknesses. People start in different places for different reasons, some ahead, some behind. I liked doing puzzles as a child, so spent my childhood devouring logic and algorithmic puzzles. That meant when I first tried to learn to program, I found it fairly easy. I had built important skills and knowledge and had become a good logical thinker and problem solver just for fun. I learnt to program for fun. That meant it was as if I’d started way beyond the starting line in the race to become a programmer. Many neurodiverse people do the same, if for different reasons.

Other skills I’ve needed as a computer scientist I have had to work hard on, developing strategies to overcome my weaknesses. I am a shy introvert. However, I need to both network and give presentations as a computer scientist (and ultimately now I give weekly lectures to hundreds at a time as an academic computer scientist). For that I had to practise, learn theory about good presentation and, perhaps most importantly given how paralysing my shyness was, devise a strategy to overcome that natural weakness. I did find a strategy – I developed an act. I have a fake extrovert persona that I act out. I act being that other person in these situations where I can’t otherwise cope, so it is not me you see giving presentations and lectures but my fake persona. Weaknesses can be overcome, even if they mean you start far behind the starting line. Of course, some weaknesses, and the ways we’ve built the world, mean we may not always be able to overcome the problems, and not everyone wants to be a computer scientist anyway. What matters is finding the future that matches your positives and interests, and where you can overcome the weaknesses when you set your mind to it.

Programming is not about born talent though (nothing is). We all have strengths and weaknesses and we can all become better by practicing and finding strategies that work for us, building upon our strengths and working on our weaknesses, especially when we have the help of a great teacher (or have help to change the way the world works so the weaknesses vanish).

My list above gives some of the key personal characteristics you need to work on improving (however good at them you are or are not right now) if you do want to be a good programmer. Anyone can become better at programming than they are if they have that desire. What matters is that you want to learn, are willing to put the practice in, can develop strategies to overcome your initial weaknesses, and you don’t give up. Neurodiverse people often have a head start on those personal attributes for becoming good at new things too.


The top 10 bugs

by Paul Curzon, Queen Mary University of London

(updated from the archive)

Bugs are everywhere, but why not learn from the mistakes of others? Here are some common bugs, with examples of how they led to it all going terribly wrong.

The bugs of others show how solid testing, code walkthroughs, formal reasoning and other methods for preventing bugs really matter. In the examples below the consequences were massive, but none of these bugs should have made it to final systems, whatever the consequences.

Here then is my personal countdown of the top 10 bugs to learn from.

BUG 10: Divide by Zero

USS Yorktown (from wikipedia)

The USS Yorktown was used as a testbed for a new generation of “smart” ship. On 21 September 1997, its propulsion system failed leaving it “dead in the water” for 3 hours. It tried to divide by zero after a crew member input a 0 where no 0 should have been, crashing every computer on the ship’s network.

Moral: Input validation matters a lot, and some checks should always be done as standard.
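Here is a minimal sketch (our own toy example, nothing to do with the Yorktown’s actual software) of the kind of guard that was missing: check the input before using it, and never divide without ruling out zero.

```python
# A toy sketch: validate input and rule out zero before dividing.
def average_speed(distance_km, time_hours):
    if time_hours == 0:
        raise ValueError("time_hours must be greater than zero")
    return distance_km / time_hours

print(average_speed(120, 2))  # 60.0
# average_speed(120, 0) now fails with a clear message
# instead of crashing deep inside some other calculation.
```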

BUG 9: Arithmetic Overflow

Boeing 787 Dreamliner
Image from Wikipedia by pjs2005 from Hampshire, UK, CC BY-SA 2.0

Keep adding to an integer variable and you run out of bits. Suddenly you have a small number not the bigger one expected. This was one bug in the Therac-25 radiation therapy machine that killed patients. The Boeing 787 Dreamliner had the same problem. Fly for more than 248 days and it would switch off.

Moral: Always have checks for overflow and underflow, and if it might matter then you need something better than a fixed bit-length number.
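Python’s own integers grow as needed, so to show the effect here is a hedged sketch that simulates what a fixed 16-bit signed counter does: keep adding and it suddenly wraps around to a negative number.

```python
# Simulate a 16-bit signed (two's-complement) integer so the wraparound is visible.
def to_int16(n):
    n = n & 0xFFFF                      # keep only 16 bits
    return n - 0x10000 if n >= 0x8000 else n

counter = 32767                         # the largest 16-bit signed value
print(to_int16(counter + 1))            # -32768: add one and it goes negative!
```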

BUG 8: Timing Problems

Three telephone handsets on pavement.
Image by Alexa from Pixabay

AT&T lost $60 million the day the phones died (all of them). It was a result of changing a few lines of working code. Things happened too fast for the program. The telephone switches reset but were told they needed to reset again before they’d finished, … and so on.

Moral: Even small code changes need thorough testing… and timing matters but needs extra special care.

BUG 6.99999989: Wrong numbers in a lookup table

Intel Pentium chip
Image from the CPU Collection of Konstantin Lanzet, CC BY-SA 3.0

Intel’s Pentium chip turned out not to be able to divide properly. It was due to a wrong entry in a lookup table. Intel set aside $475 million to cover replacing the flawed processors. Some chips were turned into key rings.

Moral: Data needs to be thoroughly checked not just instructions.
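Tables of constants can often be cross-checked against a direct calculation before they are shipped. Here is a hedged toy sketch of the idea, using a table of precomputed squares rather than anything as complicated as the Pentium’s division table.

```python
# Cross-check a lookup table of precomputed values against a direct calculation.
SQUARES = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

for n, entry in enumerate(SQUARES):
    assert entry == n * n, f"Wrong table entry at position {n}: {entry}"
print("Lookup table checked")
```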

BUG 6: Wrong units

Artist's impression of the Mars Climate Orbiter, image from Wikipedia

The Mars Climate Orbiter spent 10 months getting to Mars… where it promptly disintegrated. It passed too close to the planet’s atmosphere. The programmers assumed numbers were in newton-seconds when they were actually in pound-force seconds.

Moral: Clear documentation (including comments) really does matter.
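One common defence (our suggestion, not a description of what NASA actually does) is to put the unit in the name and do any conversion in one clearly documented place, so no one has to guess what a number means.

```python
# Put units in names and convert in one documented place.
NEWTON_SECONDS_PER_POUND_FORCE_SECOND = 4.448222  # 1 lbf·s is about 4.448222 N·s

def pound_force_seconds_to_newton_seconds(impulse_lbf_s):
    """Convert an impulse from pound-force seconds to newton seconds."""
    return impulse_lbf_s * NEWTON_SECONDS_PER_POUND_FORCE_SECOND

print(pound_force_seconds_to_newton_seconds(1.0))  # 4.448222
```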

BUG 5: Non-terminating loop

The spinning pizza of death is common. Your computer claims to be working hard, and puts up a progress symbol like a spinning wheel… forever. There are lots of ways that this happens. The simple version is that the program has entered a loop in a way that means the test to continue is never false. This took on a greater spin in the Fujitsu-built UK Post Office Horizon system, where bugs triggered the biggest ever miscarriage of justice. Hundreds of postmasters were accused of stealing money because Horizon said they had. One of many bugs was that, mid-transaction, the Horizon terminal could freeze. However, while in this infinite loop, hitting any key duplicated the transaction, subtracting money again for every key press, not just the once for the stamp being bought.

Moral: Clear loop structure matters – loops should exit only via one clear (and reasoned-about) condition.
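Here is a minimal sketch (a toy of ours, nothing like the real Horizon code) of a loop with one clear exit condition, alongside the broken shape where the condition can never become false.

```python
# A loop with one clear exit condition that gets closer to it on every pass.
def countdown(n):
    remaining = n
    while remaining > 0:      # the single, clear exit condition
        print(remaining)
        remaining -= 1        # progress towards the exit every time round

countdown(3)

# The broken shape (don't run it!): nothing in the body changes 'remaining',
# so the test stays true and the loop spins forever.
#     while remaining > 0:
#         print("still working...")
```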

BUG 4: Storing a big number in a small space

Arianespace’s Ariane 5 rocket. Photo credit: NASA/Chris Gunn, from Wikipedia, Creative Commons Attribution 2.0 Generic

The first Ariane 5 rocket exploded 40 seconds after lift-off, at a cost of $500 million. Despite $7 billion spent on the rocket, the program stored a 64 bit floating point number into a variable that could only hold a 16 bit integer.

Moral: Strong type systems are there to help not hinder. Use languages without them at your peril.

BUG 3: Memory Leak

Memory leaks (forgetting to free up space once you are done with it) are responsible for many computer problems. The Firefox browser had one. It was infamous because the Firefox developers (implausibly) claimed their program had no memory leaks.

Moral: If your language doesn’t use a garbage collector then you need to be extra careful about memory management.

BUG 2: Null pointer errors

Photograph by Rama, Wikimedia Commons, CC BY-SA 2.0 FR

Tony Hoare who invented the null pointer (a pointer that points nowhere) called it his “billion-dollar” mistake because programmers struggle to cope with it. Null pointer bugs crash computers, give hackers ways in and generally cause chaos.

Moral: Avoid null pointers and do not rely on just remembering to check for them.
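Python’s version of ‘points nowhere’ is None. Here is a hedged sketch of handling it explicitly, rather than trusting every caller to remember a check.

```python
# Return None when there is nothing to point at, and handle that case deliberately.
def find_user(name, users):
    """Return the matching user, or None if there is no such user."""
    for user in users:
        if user["name"] == name:
            return user
    return None

user = find_user("Ada", [{"name": "Alan"}])
if user is None:
    print("No such user")      # the 'nowhere' case, dealt with explicitly
else:
    print(user["name"])
```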

BUG 1: Buffer overflow

The Morris Worm, an early Internet worm, came close to shutting down the Internet. It used a buffer overflow bug in network software to move from computer to computer, shutting them down. Data was stored in a buffer, but store too much and the excess would just be placed in the next available location in memory, so it could overwrite the program with new code.

Moral: Array-like data structures need extra care. Always triple check bounds are correct and that the code ensures overflows cannot happen.

BUG 0: Off-by-one errors

Lift buttons 0, 1, 2, 3
Image by Coombesy from Pixabay

Arrays in many languages start from position 0. This means the last position is one less than the length of the array. Get it wrong… as every novice (and expert) programmer does at some point… and you run off the end. Oddly (in Europe), we have no problem in a lift pressing 1 to go to what is really the second floor of the building. In other situations, count from 0 up to the length and you do one too many things. Did I say 10 bugs… OOPS!

Moral: Think hard about every loop counter in your program.
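Here is a tiny hedged sketch of the classic mistake and its fix, using a list of lift floors.

```python
# Positions run from 0 to len-1: four floors means positions 0, 1, 2 and 3.
floors = ["Ground", "First", "Second", "Third"]

for i in range(len(floors)):        # correct: 0 up to and including len-1
    print(i, floors[i])

# for i in range(len(floors) + 1):  # off by one: position 4 does not exist...
#     print(floors[i])              # ...so this would crash with an IndexError
```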

The number 1 moral from all of this is that thorough testing matters a lot. Just trying a program a few times is not enough. In fact thorough testing is not enough either. You also need code walkthroughs, clear logical thinking about your code, formal reasoning tools where they exist, strong type systems, good commenting, and more. Most of all, programmers need to understand their code will NOT be correct and they must put lots of effort into finding bugs if crucial ones are not to be missed. Knowing the kinds of mistakes everyone makes, and being extra vigilant about them, is a good start.

And that is the end of my top 10 bugs…until the next arrogant, over-confident programmer causes the next catastrophe.


A visit to the Turing Machine: a short story

by Greg Michaelson

Greg Michaelson is an Emeritus professor of computer science at Heriot-Watt University in Edinburgh. He is also a novelist and a short story writer.

From the cs4fn archive.


Burning City
Image by JL G from Pixabay

“Come on!” called Alice, taking the coat off the peg. “We’re going to be late!”

“Do I have to?” said Henry, emerging from the front room.

“Yes,” said Alice, handing him the coat. “Of course you have to go. Here. Put this on.”

“But we’re playing,” said Henry, wrestling with the sleeves.

“Too bad,” said Alice, straightening the jacket and zipping it up. “It’ll still be there when we get back.”

“Not if someone knocks it over,” said Henry, picking up a small model dinosaur from the hall table. “Like last time. Why can’t we have electric games like you did?”

“Electronic games,” said Alice, doing up her buttons. “Not electric. No one has them anymore. You know that.”

“Were they really digital?” asked Henry, fiddling with the dinosaur.

“Yes,” said Alice, putting on her hat. “Of course they were digital.”

“But the telephone’s all right,” said Henry.

“Yes,” said Alice, checking her makeup in the mirror. “It’s analogue.”

“And radio. And record players. And tape recorders. And television,” said Henry.

“They’re all analogue now,” said Alice, putting the compact back into her handbag. “Anything analogue’s fine. Just not digital. Stop wasting time! We’ll be late.”

“Why does it matter if we’re late?” asked Henry, walking the dinosaur up and down the hall table.

“They’ll notice,” said Alice. “We don’t want to get another warning. Put that away. Come on.”

“Why don’t the others have to go?” asked Henry, palming the dinosaur.

“They went last Sunday,” said Alice, opening the front door. “You said you didn’t want to go. We agreed I’d take you today instead.”

“Och, granny, it’s so boring…” said Henry.

They left the house and walked briskly to the end of the street. Then they crossed the deserted park, following the central path towards the squat neo-classical stone building on the far side.

“Get a move on!” said Alice, quickening the pace. “We really are going to be late.”

————————-

Henry really hadn’t paid enough attention at school. He knew that Turing Machines were named for Alan Turing, the first Martyr of the Digital Age. And he knew that a Turing Machine could work out sums, a bit like a school child doing arithmetic. Only instead of a pad of paper and a pencil, a Turing Machine used a tape of cells. And instead of rows of numbers and pluses and minuses on a page, a Turing Machine could only put one letter on each cell, though it could change a letter without having to actually rub it out. And instead of moving between different places on a piece of paper whenever it wanted to, and maybe doodling in between the sums, a Turing Machine could only move the tape left and right one cell at a time. But just like a school child getting another pad from the teacher when they ran out of paper, the Turing Machine could somehow add another empty cell whenever it got to the end of the tape.

————————-

When they reached the building, they mounted the stone staircase and entered the antechamber through the central pillars. Just inside the doorway, Alice gave their identity cards to the uniformed guard.

“I see you’re a regular,” she said approvingly to Alice, checking the ledger. “But you’re not,” sternly to Henry.

Henry stared at his shoes.

“Don’t leave it so long, next time,” said the guard, handing the cards back to Alice. “In you go. They’re about to start. Try not to make too much noise.”

Hand in hand, Alice and Henry walked down the broad corridor towards the central shrine. On either side, glass cases housed electronic equipment. Computers. Printers. Scanners. Mobile phones. Games consoles. Laptops. Flat screen displays.

The corridor walls were lined with black and white photographs. Each picture showed a scene of destitution from the Digital Age.

Shirt sleeved stock brokers slumped in front of screens of plunging share prices. Homeless home owners queued outside a state bank soup kitchen. Sunken eyed organic farmers huddled beside mounds of rotting vegetables. Bulldozers shovelled data farms into land fill. Lines of well armed police faced poorly armed protestors. Bodies in bags lay piled along the walls of the crematorium. Children scavenged for toner cartridges amongst shattered office blocks.

Alice looked straight ahead: the photographs bore terrible memories. Henry dawdled, gazing longingly into the display cases: Gameboy. Playstation. X Box…

“Come on!” said Alice, sotto voce, tugging Henry away from the displays.

At the end of the corridor, they let themselves into the shrine. The hall was full. The hall was quiet.

————————-

Henry was actually quite good at sums, and he knew he could do them because he had rules in his head for adding and subtracting, because he’d learnt his tables. The Turing Machine didn’t have a head at all, but it did have rules which told it what to do next. Groups of rules that did similar things were called states, so all the rules for adding were kept separately from all the rules for subtracting. Every step of a Turing machine sum involved finding a rule in the state it was working on to match the letter on the tape cell it was currently looking at. That rule would tell the Machine how to change the symbol on the tape, which way to move the tape, and maybe to change state to a different set of rules.

————————-

On the dais, lowered the Turing Machine, huge coils of tape links disappearing into the dark wells on either side, the vast frame of the state transition engine filling the rear wall. In front of the Turing Machine, the Minister of State stood at the podium.

“Come in! Come in!” he beamed at Alice and Henry. “There’s lots of space at the front. Don’t be shy.”

Red faced, Alice hurried Henry down the aisle. At the very front of the congregation, they sat down cross legged on the floor beneath the podium.

“My friends,” began the Minister of State. “Welcome. Welcome indeed! Today is a special day. Today, the Machine will change state. But first, let us be silent together. Please rise.”

The Minister of State bowed his head as the congregation shuffled to its feet.

———————–

According to Henry’s teacher, there was a different Turing Machine for every possible sum in the world. The hard bit was working out the rules. That was called programming, but, since the end of the Digital Age, programming was against the law. Unless you were a Minister of State.

————————

“Dear friends,” intoned the Minister of State, after a suitable pause. “We have lived through terrible times. Times when Turing’s vision of equality between human and machine intelligences was perverted by base greed. Times when humans sought to bend intelligent machines to their selfish wills for personal gain. Times when, instead of making useful things that would benefit everybody, humans invented and sold more and more rarefied abstractions from things: shares, bonds, equities, futures, derivatives, options…”

————————

The Turing Machine on the dais was made from wood and brass. It was extremely plain, though highly polished. The tape was like a giant bicycle chain, with holes in the centre of each link. The Machine could plug a peg into a hole to represent a one or pull a peg out to represent a zero. Henry knew that any information could be represented by zeros and ones, but it took an awful lot of them compared with letters.

————————-

“… Soon there were more abstractions than things, and all the wealth embodied in the few things that the people in poor countries still made was stolen away, to feed the abstractions made by the people in the rich countries. None of this would have been possible without computers…”

————————-

The state transition unit that held the rules was extremely complicated. Each rule was a pattern of pegs, laid out in rows on a great big board. A row of spring mounted wooden fingers moved up and down the pegs. When they felt the rule for the symbol on the tape cell link, they could trigger the movement of a peg in or out of the link, and then release the brakes to start up one revolution of the enormous cog wheels that would shift the tape one cell left or right.

A stone looking like a scared face
Image by Dean Moriarty from Pixabay

————————-

“…With all the computers in the world linked together by the Internet, humans no longer had to think about how to manage things, about how best to use them for the greatest good. Instead, programs that nobody understood anymore made lightning decisions, moving abstractions from low profits to high profits, turning the low profits into losses on the way, never caring how many human lives were ruined…”

————————-

The Turing Machine was powered by a big brass and wooden handle connected to a gear train. The handle needed lots of turns to find and apply the next rule. At the end of the ceremony, the Minister of State would always invite a member of the congregation to come and help him turn the handle. Henry always hoped he’d be chosen.

——————————

“…Turing himself thought that computers would be a force for untold good; that, guided by reason, computers could accomplish anything humans could accomplish. But before his vision could be fully realised, he was persecuted and poisoned by a callous state interested only in secrets and profits. After his death, the computer he helped design was called the Pilot Ace; just as the pilot guides the ship, so the Pilot Ace might have been the best guide for a true Digital Age…”

——————————

Nobody was very sure where all the cells were stored when the Machine wasn’t inspecting them. Nobody was very sure how new cells were added to the ends of the tape. It all happened deep under the dais. Some people actually thought that the tape was infinite, but Henry knew that wasn’t possible as there wasn’t enough wood and brass to make it that long.

——————————

“…But almost sixty years after Turing’s needless death, his beloved universal machines had bankrupted the nations of the world one by one, reducing their peoples to a lowest common denominator of abject misery. Of course, the few people that benefited from the trade in abstractions tried to make sure that they weren’t affected but eventually even they succumbed…”

——————————

Nobody seemed to know what the Turing Machine on the dais was actually computing. Well, the Minister of State must have known. And Turing had never expected anyone to actually build a real Turing Machine with real moving parts. Turing’s machine was a thought experiment for exploring what could and couldn’t be done by following rules to process sequences of symbols.

——————————

“…For a while, everything stopped. There were power shortages. There were food shortages. There were medical shortages. People rioted. Cities burned. Panicking defence forces used lethal force to suppress the very people they were supposed to protect. And then, slowly, people remembered that it was possible to live without abstractions, by each making things that other people wanted, by making best use of available resources for the common good…”

——————————

The Turing Machine on the dais was itself a symbol of human folly, an object lesson in futility, a salutary reminder that embodying something in symbols didn’t make it real.

——————————

“…My friends, let us not forget the dreadful events we have witnessed. Let us not forget all the good people who have perished so needlessly. Let us not forget the abject folly of abstraction. Let the Turing Machine move one step closer along the path of its unknown computation. Let the Machine change its state, just as we have had to change ours. Please rise.”

The congregation got to their feet and looked expectantly at the Minister of State. The Minister of State slowly inspected the congregation. Finally, his eyes fixed on Henry, fidgeting directly in front of him.

“Young man,” he beamed at Henry. “Come. Join me at the handle. Together we shall show that Machine that we are all its masters.”

Henry looked round at his grandmother.

“Go on,” she mouthed. “Go on.”

Henry walked round to the right end of the dais. As he mounted the wooden stairs, he noticed a second staircase leading down behind the Machine into the bowels of the dais.

“Just here,” said the Minister of State, leading Henry round behind the handle, so they were both facing the congregation. “Take a good grip…”

Henry was still clasping the plastic dinosaur in his right hand. He put the dinosaur on the nearest link of the chain and placed both hands on the worn wooden shaft.

“And turn it steadily…”

Henry leant into the handle, which, much to his surprise, moved freely, sweeping the wooden fingers across the pegs of rules on the state transition panel. As the fingers settled on a row of pegs, a brass prod descended from directly above the chain, forcing the wooden peg out of its retaining hole in the central link. Finally, the chain slowly began to shift from left to right, across the front of the Machine, towards Henry and the Minister of State. As the chain moved, the plastic dinosaur toppled over and tumbled down the tape well.

“Oh no!” cried Henry, letting go of the handle. Utterly nonplussed, the Minister of State stood and stared as Henry peered into the shaft, rushed to the back of the Machine and hurried down the stairs into the gloom.

A faint blue glow came from the far side of the space under the dais. Henry cautiously approached the glow, which seemed to come from a small rectangular source, partly obscured by someone in front of it.

“Please,” said Henry. “Have you seen my dinosaur?”

“Hang on!” said a female voice.

The woman stood up and lit a candle. Looking round, Henry could now see that the space was festooned with wires, leading into electric motors driving belts connected to the Turing Machine. The space was implausibly small. There was no room for a finite tape of any length at all, let alone an infinite one.

“Where are all the tape cells?” asked Henry, puzzled.

“We only need two spare ones,” said the woman. “When the tape moves, we stick a new cell on one end and take the cell off the other.”

“So what’s the blue light?” asked Henry.

“That’s a computer,” said the woman. “It keeps track of what’s on the tape and controls the Turing Machine.”

“A real digital computer!” said Henry in wonder. “Does it play games?”

“Oh yes!” said the woman, turning off the monitor as the Minister of State came down the stairs. “What do you think I was doing when you showed up? But don’t tell anyone. Now, let’s find that dinosaur.”



Lego Computer Science: Turing Machines Part 3: the program

by Paul Curzon, Queen Mary University of London

We have so far built the hardware of a Lego Turing Machine. Next we need the crucial part: software. It needs a program to tell it what to do.

Our Turing Machine so far has an Infinite Tape, a Tape Head and a Controller. The Tape holds data values taken from a given set of 4×4 bricks and starts in a specific initial pattern: the Initial Tape. The Controller holds different coloured 3×2 bricks representing an initial state, an end state and a current state, and has a set of other possible states (so coloured bricks) to substitute for the current state.

Why do we need a program?

As the machine runs it changes from one state to another, and reads values from or writes values to the tape. How it does that is governed by its Program. What is the new state, what is the new value and how does the tape head move? The program gives the answers. The program is just a set of instructions the machine must blindly follow. Each instruction is a single rule to follow, and each program is a set of such rules. In our Turing Machines, these rules are not set out in an explicit sequence as happens in a procedural program, say. A Turing Machine program uses a different paradigm for what a program is. Instead, at any time only one of the set of rules should match the current situation and that is the one that is followed next.

Individual Instructions

A single rule contains five parts: a Current State to match against, a Current Value under the Tape Head to match against, a New State to replace the existing one, and a New Value to write to the tape. Finally, it holds a Direction to Move the Tape Head (left or right or stay in the same place). An example might be:

  • Current State: ORANGE
  • Current Value: RED
  • New State: GREEN
  • New Value: BLUE
  • Direction: RIGHT

But what does a rule like this actually do?

What does it mean?

You can think of each instruction as an IF-THEN rule. The above rule would mean:

IF 

  •    the machine is currently in state ORANGE AND    
  •    the Tape Head points to RED 

THEN (take the following actions)

  •     change the state to GREEN, 
  •     write the new value BLUE on the tape AND THEN
  •     move the tape head RIGHT.

This is what a computer scientist would call the programming language Semantics. The semantics tell you what program instructions mean, so what they do.
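To make that semantics concrete, here is a hedged sketch in Python of the same IF-THEN rule. The names (rules, step and so on) are ours, chosen just for this illustration, not part of the Lego machine.

```python
# The rule table maps (current state, current value) to
# (new state, new value, direction to move the tape head).
rules = {
    ("ORANGE", "RED"): ("GREEN", "BLUE", "RIGHT"),
    # ...one entry for every (state, value) pair the machine might meet
}

def step(state, tape, head):
    """Apply the single rule matching the current state and the value under the head."""
    new_state, new_value, direction = rules[(state, tape[head])]
    tape[head] = new_value                                     # write the new value
    head += {"RIGHT": 1, "LEFT": -1, "STAY": 0}[direction]     # move the tape head
    return new_state, head

tape = ["RED", "GREY", "GREY"]
state, head = step("ORANGE", tape, 0)
print(state, head, tape)    # GREEN 1 ['BLUE', 'GREY', 'GREY']
```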

Representing Instructions in Lego

We will use  a series of 5 bricks in a particular order to represent the parts of the rule. For example, we will use a yellow 3×2 brick in the first position of a rule to represent the fact that the rule will only trigger if the current state is yellow. A blue 2×2 brick in the second position will mean the rule will also only trigger if the current value under the tape head is blue. We will use a grey brick to mean an empty tape value. The third and fourth position will represent the new state and new value if the rule does trigger. To represent the direction to move we will use a 1×2 Red brick to mean move Right, and a 1×2 yeLLow brick to mean move Left. We will use a black 1×2 brick to mean do not move the tape head (mirroring the way we are also using black to mean do nothing in the sense of the special end state). The above rule would therefore be represented in Lego as below. 

A Turing machine instruction:
Current state: orange
Current value: red
New state: green
New value: blue
Direction: red (move right)
A single instruction for a Lego Turing Machine

Notice we are using the same colour to represent different things here. The representation is the colour combined with the size of brick and position in the rule. So a Red brick can mean a red state (a red 3×2 brick) or a red value (a red 2×2 brick) or move right (a red 1×2 brick).

Lego programs

That is what a rule, so a single Turing Machine instruction, looks like. Programs are just a collection of such rules: so a series of lines of bricks.

Suppose we have a Turing machine with two states (Red and Orange) and two values on the tape (Blue or Empty), then a complete program would have 4 rules, one for each possible combination. We have given one example program below. If there were more states or more possible data values then the program would be correspondingly bigger to cover all the possibilities.

A Turing Machine Program
Red-Blue -> Red-Blue-Red
Red-Grey -> Orange-Blue-Yellow
Orange-Blue -> Orange-Blue-Yellow
Orange-Grey -> Black-Grey-Red
A 4 instruction Turing Machine Program for a Turing Machine with two states (Red, Orange) and two data values (Blue, Empty)

A Specific Turing Machine

Exactly what it does will depend on its input – the initial tape it is given to process, as well as the initial state and where the tape head initially points to. Perhaps you can work out what the above program does given a tape with an empty value followed by a series of three blue bricks, and then empty data values off to infinity (the blank value is the only value that is allowed to appear an infinite number of times on an initial tape), with the Head pointing to the rightmost blue brick value. The initial state is red. See the Lego version of this specific machine below.

A Turing Machine with Tape, Controller and Program.
A full Turing Machine ready to execute.

Note something we have glossed over. You also potentially need an infinite number of bricks of each value that is allowed on the tape. We have a small pile, but you may need that Lego factory we mentioned previously, so that as the Turing Machine runs you always have a piece to swap on to the machine tape when needed. Luckily, for this machine a small number of bricks should be enough (as long as you do not keep running it)!

What does this Turing Machine do? We will look at what it does and how to work it out in a future article. In the meantime try and work out what it does with this tape, but also what it does if the tape has more or less blue bricks in a row on it to start with (with everything else kept the same).

Note that, to keep programs smaller, you could have a convention that if no rule fits a situation then the program ends. Then you could have fewer rules in some programs. However, that would just be shorthand for there being extra rules with black new states, the tape being left alone, and the tape head moving right. In real programming, it is generally a good idea to ALWAYS be explicit about what you intend the program to do, as otherwise it is an easy way for bugs to creep in, for example because you just forgot to say what should happen in some case.

Alan Turing invented Turing Machines before any computer existed. At the time a “computer” was a person who followed rules to do calculations (just like you were taught the rules to follow to do long multiplication at primary school, for example). His idea was therefore that a human would follow the rules in a Turing Machine program, checking the current state and value under the tape head, and changing the state, the value on the tape and the position of the head. A person provides the power and the equivalent of a robotic arm, following the underlying Turing Machine algorithm: the algorithm that, if followed, causes each Turing Machine’s program to execute.
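That underlying algorithm can also be sketched in code. Below is a hedged Python version of the ‘run’ loop a person follows: look up the rule for the current state and value, apply it, and repeat until the end state (black) is reached. The tiny two-rule program used here is our own, not the one above, so it does not give the puzzle away.

```python
# A toy program of ours: wipe blue bricks to the right until an empty cell is found.
rules = {
    ("RED", "BLUE"): ("RED", "GREY", "RIGHT"),   # wipe the blue brick, move right
    ("RED", "GREY"): ("BLACK", "GREY", "STAY"),  # hit an empty cell: halt
}

def run(state, tape, head):
    while state != "BLACK":                      # keep going until the end state
        new_state, new_value, direction = rules[(state, tape[head])]
        tape[head] = new_value
        head += {"RIGHT": 1, "LEFT": -1, "STAY": 0}[direction]
        state = new_state
    return tape

print(run("RED", ["BLUE", "BLUE", "GREY"], 0))   # ['GREY', 'GREY', 'GREY']
```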

If a human animating the machine was good enough for Turing, it is good enough for us, so that is how our Lego Turing Machines will work. Your job will be to follow the rules and so operate the machine. Perhaps, that is exactly what you did to work out what the program above does!

Next we will look at how to work out what a Turing Machine does. Then it will be time to write, then run, some Turing Machine programs of your own…


More Lego Computer Science

Image shows a Lego minifigure character wearing an overall and hard hat looking at a circuit board, representing Lego Computing
Image by Michael Schwarzenberger from Pixabay

Part of a series featuring pixel puzzles,
compression algorithms, number representation,
gray code, binary and computation.

EPSRC supports this blog through research grant EP/W033615/1. The Lego Computer Science post was originally funded by UKRI, through grant EP/K040251/2 held by Professor Ursula Martin, and forms part of a broader project on the development and impact of computing.

Eggheads: helping us to visualise objects and classes

by Daniel Gill, Queen Mary University of London

Past CS4FN articles have explored object-oriented programming through self-aware pizza and Strictly Come Dancing judges. But, if you’re one of those people who like to learn visually, it can be challenging to imagine what an object or class looks like. This article will hopefully help you both to think more about what makes this paradigm so useful and to give you a way to visualise objects.

Ada the Egghead
by Daniel Gill

To begin this adventure, I’d like to introduce Ada. Ada is an example of the newly discovered species egghead. Every egghead has very distinctive hair and eye colours. For example, Ada’s hair is a delightfully bright pink, and their eyes, a deep red. Despite their appearance, the egghead has a vicious roar intended to ward off predators, or indeed poachers.

Classes

As computer scientists, we might want to represent different eggheads in a program, but we don’t really want to store information about the eggheads as a written description or an image, because this would be harder than need be for a computer to ‘understand’ (or rather to process, as computers don’t really understand as such). Instead, we can store lots of individual features together, so that the computer can find out exactly what it needs from each egghead.

Egghead class

One way to achieve this is by using a class. A class is a template which contains spaces for us to fill in details for the thing we want to represent. For the egghead, we might want to store data about their name, hair colour, and eye colour – then we can fill in the template for each of the eggheads we find. These individual features are often called attributes. In a program, these attributes would be represented with variables: a place where a value, a piece of data, is stored. We can visualise a class therefore as a box with gaps to fill in for the attributes like the one on the left.

From this image, you will see that alongside the attributes we also have an image of a button for roaring. As well as storing attributes, we also define behaviours. These are actions that we can perform on the thing being represented. We visualise any behaviour as such a button. For this example, we could imagine that pressing this button might provoke the egghead, causing them to roar. In programming, a behaviour is represented by a procedure: some pre-defined code that, when executed, makes something happen. A button that causes something to happen is a simple way to visualise such procedures.

A key point to realise is that this class is simply a template – it isn’t storing any information, nor will the roar button work. We have no actual eggheads yet… That is where objects come in.

Objects

You may have noticed that I have been using the word thing to represent the actual thing (here eggheads) that we are representing with the class. This is to avoid using the word object, which has its own special meaning, in, you guessed it, object-oriented programming. If we want to actually use our class, we need to make an instance. That just means filling in the relevant details about the specific egghead we want to store. This instance is called an object.

Let’s imagine we want to store a representation of Ada in our program. We would take an instance of the Egghead class and fill in their details. The resulting object would represent Ada, whereas the class we started with represents all and any egghead that might ever exist. Below, you can see the objects for Ada and some of Ada’s friends: Alan and Edsger. We still visualise objects as boxes, just like classes, except now the gaps are all filled in.

Objects representing Ada, Alan and Edsger

We (or a computer) could even take the given features of Alan and Edsger and generate an image of what they might look like. We have everything we need here to create something that looks and behaves like an egghead. This method of storing data means that a program can take whatever information it might want directly from the object. Likewise, it can do the equivalent of pressing the roar button and make each individual egghead roar.
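If you would like to see this in real code, here is a hedged Python sketch of the Egghead class: the attribute and method names are our choice, and Alan’s colours are invented here since only Ada’s are described above.

```python
# A sketch of the Egghead class: attributes are the gaps in the template,
# and the 'roar' button becomes a method (a procedure attached to the object).
class Egghead:
    def __init__(self, name, hair_colour, eye_colour):
        self.name = name
        self.hair_colour = hair_colour
        self.eye_colour = eye_colour

    def roar(self):
        print(f"{self.name} lets out a vicious ROAR!")

# Objects: instances of the class with the gaps filled in.
ada = Egghead("Ada", "bright pink", "deep red")
alan = Egghead("Alan", "green", "blue")     # colours invented for this example

ada.roar()                  # Ada lets out a vicious ROAR!
print(alan.hair_colour)     # green
```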

Hiding the Details

Trying to change Alan’s eye colour

One thing we should consider while making the class is the integrity of the data. In its current form, any other part of our program, or another program using our class, can directly edit the attributes stored. Another part of the program (perhaps representing a virtual world for a virtual egghead to live in) might accidentally change the eye colour attribute for Alan, for example. This wouldn’t change Alan’s actual eye colour (which couldn’t happen anyway!), so our data would then be wrong. We can’t have that!

We can fix this by hiding the eye colour from the rest of the program, so it is stored within the object, but not accessible outside of it. But we still need a way for the program to read it: for this we use a button in our picture of the Egghead class. The eye colour attribute can then only be accessed by other parts of the program through a procedure that gets the eye colour. No similar procedure is given for changing the eye colour, so there is no way to do it by mistake. Let’s build this new version of our class.

(a) Our new class, (b) An object representing Edsger, with eye colour hidden, (c) Pressing GetEyes gives us the eye colour.

This concept of hiding details is sometimes called encapsulation or information hiding, but computer scientists disagree about what these terms refer to exactly. Encapsulation is broader in its meanings, whereas information hiding is closer to what we are trying to do here. This video by ArjanCodes (see below) explains this distinction further.

We could change our class to include this concept for the name and hair colour, too. Whilst it is entirely possible for these attributes to change, it turns out to be a good idea to hide them too: so hide the name and provide SetName and GetName buttons. That allows us to control the kind of data going into that attribute (for example, checking the given name isn’t a number, which would be a mistake as all egghead names are made of letters).
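
Continuing the Python sketch, the hidden version of the class might look something like the following. Python doesn’t strictly enforce hiding, so the leading underscore is the usual ‘keep out’ convention, and the get_eyes, get_name and set_name procedures play the role of the GetEyes, GetName and SetName buttons:

  class Egghead:
      """A version of the class with its details hidden."""

      def __init__(self, name, hair_colour, eye_colour):
          # The underscore marks these as internal: the rest of the
          # program should not read or change them directly
          self._name = name
          self._hair_colour = hair_colour
          self._eye_colour = eye_colour

      def get_eyes(self):
          # The only way other code can see the eye colour
          return self._eye_colour

      def get_name(self):
          return self._name

      def set_name(self, new_name):
          # Control what goes in: egghead names are made of letters
          if not new_name.isalpha():
              raise ValueError("An egghead name must be made of letters")
          self._name = new_name

      def roar(self):
          print(self._name + " lets out a mighty ROAR!")

  # There is deliberately no set_eyes, so an eye colour can never be
  # changed by mistake.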

Where next?

Now that we have a class that represents all eggheads, we can store the details of any new egghead efficiently and safely. Hold on… some last-minute breaking news: scientists have found a new sub-species of egghead they are calling a rainbow egghead. All rainbow eggheads have rainbow hair and a unique roar. Next time, we’ll use the concept of inheritance to give a more efficient way to write programs that store information about eggheads.

Image credits: all images were created by the author, Daniel Gill.


EPSRC supports this blog through research grant EP/W033615/1.

Equality, diversity and inclusion in the R Project: collaborative community coding & curating with Dr Heather Turner

You might not think of a programming language like Python or Scratch as being an ‘ecosystem’ but each language has its own community of people who create and improve its code, flush out the bugs, introduce new features, document any changes and write the ‘how to’ guides for new users. 

The logo for the R project.

R is one such programming language. It’s named after its two co-inventors (Ross Ihaka and Robert Gentleman) and is used by around two million people around the world. People working in all sorts of jobs and industries (for example finance, academic research, government, data journalists) use R to analyse their data. The software has useful tools to help people see patterns in their data and to make sense of that information. 

It’s also open source which means that anyone can use it and help to improve it, a bit like Wikipedia where anyone can edit an article or write a new one. That’s generally a good thing because it means everyone can contribute but it can also bring problems. Imagine writing an essay about an event at your school and sharing it with your class. Then imagine your classmates adding paragraphs of their own about the event, or even about different events. Your essay could soon become rather messy and you’d need to re-order things, take bits out and make sure people hadn’t repeated something that someone had already said (but in a slightly different way). 

When changes are made to software people also want to keep a note not just of the ‘words’ added (the code) but also to make a note of who added what and when. Keeping good records, also known as documentation, helps keep things tidy and gives the community confidence that the software is being properly looked after.

Code and documentation can easily become a bit chaotic when created by different people in the community so there needs to be a core group of people keeping things in order. Fortunately there is – the ‘R Core Team’, but these days its membership doesn’t really reflect the community of R users around the world. R was first used in universities, particularly by more privileged statistics professors from European countries and North America (the Global North), and so R’s development tended to be more in line with their academic interests. R needs input and ideas from a more diverse group of active developers and decision-makers, in academia and beyond, to ensure that the voices of minoritised groups are included. It also needs the voices of younger people, particularly as many of the current core group are approaching retirement age.

Dr Heather Turner from the University of Warwick is helping to increase the diversity of those who develop and maintain the R programming language, and she’s been given funding by the EPSRC* to work on this. Her project is a nice example of someone bringing together two different areas in her work. She is mixing software development (tech skills) with community management (people skills) to support a range of colleagues who use R and might want to contribute to developing it in future, but perhaps don’t feel confident enough to do so yet.

Development can involve things like fixing bugs, helping to improve the behaviour or efficiency of programs, or translating error messages that currently appear on-screen in English into other languages. Heather and her colleagues are working with the R community to create a more welcoming environment for ‘newbies’ that encourages participation, particularly from people who are in the community but are not represented, or are under-represented, in the core group. She’s working collaboratively with other community organisations such as R-Ladies, LatinR and RainbowR. Another task she’s involved in is producing an easier-to-follow ‘How to develop R’ guide.

There are also people who work in universities but who aren’t academics (they don’t teach or do research but do other important jobs that help keep things running well), and some of them use R too and can contribute to its development. However, their contributions have been less likely to get proper recognition or career rewards compared with those made by academics, which is a little unfair. That’s largely because of the way the academic system is set up. 

Generally it’s academics who apply for funding to do new research; they do the research and then publish papers in academic journals on the research they’ve done, and these publications are evidence of their work. But the important work that support staff do in maintaining the software isn’t classified as new research, so it doesn’t generally make it into the journals, and their contribution can get left out. They also don’t necessarily get the same career support or mentoring for their development work. This can make people feel a bit sidelined or discouraged. 

Logo for the Society of Research Software Engineering

To try and fix this, and to make things fairer, the Society of Research Software Engineering was created to champion a new type of job in computing – the Research Software Engineer (RSE). These are people whose job is to develop and maintain (engineer) the software that is used by academic researchers (sometimes in R, sometimes in other languages). The society wants to raise awareness of the role and to build a community around it. You can find out what’s needed to become an RSE below. 

Heather is in a great position to help here too, as she has a foot in each camp – she’s both an academic and a Research Software Engineer. She’s helping to establish the RSE as an important role in universities while also further expanding the diversity of people involved in developing R, for its long-term sustainability.

Further reading

*Find out more about Heather’s EPSRC-funded Fellowship: “Sustainability and EDI (Equality, Diversity, and Inclusion) in the R Project” https://gtr.ukri.org/projects?ref=EP%2FV052128%2F1 and https://society-rse.org/getting-to-know-your-2021-rse-fellows-heather-turner/ 

Find out more about the job of the Research Software Engineer and the Society of Research Software Engineering https://society-rse.org/about/ 

Example job packs / adverts

Below are some examples of RSE jobs (these vacancies have now closed but you can read about what they were looking for and see if it might interest you in the future).

Note that these documents are written for quite a technical audience – the people who’d apply for the jobs will have studied computer science for many years and will be familiar with how computing skills can be applied to different subjects.

1. The Science and Technology Facilities Council (STFC) wanted four Research Software Engineers (who’d be working either in Warrington or Oxford) on a chemistry-related project (‘computational chemistry’ – “a branch of chemistry that uses computer simulation to assist in solving chemical problems”) 

2. The University of Cambridge was looking for a Research Software Engineer to work in the area of climate science – “Computational modelling is at the core of climate science, where complex models of earth systems are a routine part of the scientific process, but this comes with challenges…”

3. University College London (UCL) wanted a Research Software Engineer to work in the area of neuroscience (studying how the brain works, in this case by analysing the data from scientists using advanced microscopy).


EPSRC supports this blog through research grant EP/W033615/1.

The first computer wizard

by Paul Curzon, Queen Mary University of London

A rainbow coloured checkers board

Christopher Strachey did a series of firsts in computer programming, and that was just when he was playing.

With a cryptographer father and a suffragist mother, Christopher Strachey was a school teacher when he first started ‘playing’ with computers in the early 1950s. He had been given the chance to write programs, first for the National Physical Laboratory’s ACE computer and then the Manchester Mark 1: two of the earliest working computers in the world. The range of things he achieved is amazing. He probably created the first serious computer game you could play against (a draughts-playing game), the first recorded computer music, the first “creative” program (a love-letter writing program) … and he was just enjoying himself.

He went on to do serious computing, becoming an early computer consultant and later leading the Oxford University Programming Research Group. He invented the idea of time-sharing computers and developed the CPL language (the ancestor of C and of many modern programming languages, so it has had a powerful effect on subsequent programming language design). Perhaps most notably, with Dana Scott he pioneered the idea of using maths to describe the meaning of programming languages, called denotational semantics. Oh, and he was a wizard debugger too, famous for quickly debugging his own and other people’s programs. He achieved all of this despite poor performance at school and university when younger, and despite suffering a nervous breakdown at university that interrupted his studies. It has been suggested that the breakdown might have been due to him coming to terms with the fact that he was homosexual: homosexuality, legal now, was then illegal in the UK.




This blog is funded by UKRI, through grant EP/W033615/1.

Fran Allen: Smart Translation

Cars making light patterns at night

Image by Светлана from Pixabay

by Paul Curzon, Queen Mary University of London
(Updated from the archive)

Computers don’t speak English, or Urdu or Cantonese for that matter. They have their own special languages that human programmers have to learn if they want to create new applications. Even those programming languages aren’t the language computers really speak. They only understand 1s and 0s. The programmers have to employ translators to convert what they say into Computerese (actually binary): just as if I wanted to speak with someone from Poland, I’d need a Polish translator. Computer translators aren’t called translators though. They are called ‘compilers’, and just as it might be a Pole who translated for me into Polish, compilers are special programs that can take text written in a programming language and convert it into binary.

The development of good compilers has been one of the most important advances of the early years of computing, and Fran Allen, one of the star researchers at computer giant IBM, was awarded the Turing Award for her contribution. It is the computer science equivalent of a Nobel Prize. Not bad given she only joined IBM to clear her student debts from university.

Fran was a pioneer with her groundbreaking work on ‘optimizing compilers’. Translating human languages isn’t just about taking a word at a time and substituting each for the word in the new language. You get gibberish that way. The same goes for computer languages.

Things written in programming languages are not just any old text. They are instructions. You actually translate chunks of instructions together in one go. You also add a lot of detail to the program in the translation, filling in every little step.

Suppose a Japanese tourist used an interpreter to ask me for directions for how to get to Sheffield from Leeds. I might explain it as:

“Follow the M1 South from Junction 43 to Junction 33”.

If the Japanese translator explained it as a compiler would they might actually say (in Japanese):

“Take the M1 South from Junction 43 as far as Junction 42, then follow the M1 South from Junction 42 as far as Junction 41, then follow … from Junction 34 as far as Junction 33”.

Computers actually need all the minute detail to follow the instructions.

The most important thing about computer instructions (i.e., programs) is usually how fast following them gets the job done. Imagine I was on the information desk at Heathrow airport and the tourist wanted to get to Sheffield. I’ve never done that journey. I do know how to get from Heathrow to Leeds as I’ve done it a lot. I’ve also gone from Leeds to Sheffield a lot, so I know that journey too. So the easiest way for me to give instructions for getting from London to Sheffield, without much thought and while being sure they get the tourist there, might be to say:

Go from Heathrow to Leeds:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Leeds at Junction 43

Then go from Leeds to Sheffield:

  1. Take the M1 South to Sheffield at Junction 33

That is easy to write, and perhaps made up of instructions I’ve written before. Programmers reuse instructions like this a lot – it both saves their time and reduces the chances of introducing mistakes into the instructions. It isn’t the optimum way to do the journey of course: you pass the turn-off for Sheffield on the way up. An optimizing compiler is an intelligent compiler. It looks for inefficiency and converts the instructions into a shorter and faster set. The Japanese translator, if acting like an optimizing compiler, would remove the redundant instructions from the ones I gave and simplify them (before converting them into all the junction-by-junction detailed steps) to:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Sheffield at Junction 33

Much faster! Much more intelligent! Happier tourists!
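
To get a feel for the kind of tidying-up an optimizing compiler does, here is a toy sketch in Python rather than in binary. The function and the numbers are invented for illustration; with a real compiler, the programmer writes the first version and the compiler quietly produces something like the second:

  # What a programmer might write: clear, but it repeats work,
  # recomputing (1 + tax_rate) on every trip round the loop
  def total_cost(prices, tax_rate):
      total = 0
      for price in prices:
          total = total + price * (1 + tax_rate)
      return total

  # What an optimizing compiler effectively turns it into:
  # the repeated calculation is lifted out and done just once
  def total_cost_optimized(prices, tax_rate):
      multiplier = 1 + tax_rate
      total = 0
      for price in prices:
          total = total + price * multiplier
      return total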

Next time you take the speed of your computer for granted, remember it is not just that fast because the hardware is quick, but because, thanks to people like Fran Allen, the compilers don’t just do what the programmers tell them to do. They are far smarter than that.



EPSRC supports this blog through research grant EP/W033615/1. 

Object-oriented pizza at the end of the universe

An eclipse halo, looking like a blank pizza, with a spark of life triggering it to make itself.
Image by ipicgr from Pixabay

by Paul Curzon, Queen Mary University of London

(Based on a section from Computing Without Computers, a free book by Paul to help struggling students understand programming concepts).

Object-oriented programming is a popular kind of programming. To understand what it is all about, it can help to think about cooking a meal (Hitchhiker’s Guide to the Galaxy style) where the meal cooks itself.

People talk about programs being like recipes to follow. This can help because both programs and recipes are sets of instructions. If you follow the instructions precisely in the right order, it should lead to the intended result (without you needing any thought of how to do it yourself).

That is only one way of thinking about what a program is, though. The recipe metaphor corresponds to a style of programming called procedural programming. Another completely different way of thinking about programs (a different paradigm) is object-oriented programming. So what is that about if not recipes?

In object-oriented programming, programmers think of a program not as a series of recipes (so not sets of instructions to be followed that do distinct tasks) but as a series of objects that send messages to each other to get things done. Different objects also have different behaviours – different actions they can perform. What do we mean by that? That is where The Hitchhiker’s Guide to the Galaxy may help.

In the book “The Restaurant at the End of the Universe”, by Douglas Adams, part of the Hitchhiker’s Guide to the Galaxy series, genetically modified animals are bred to delight in being your meal. They take great personal pride in being perfectly fattened and might suggest their leg as being particularly tasty, for example.

We can take this idea a little further. Imagine a genetically engineered future in which animals and vegetables are bred to have such intelligence (if you can call it that) and are able to cook themselves. Each duck can roast itself to death or alternatively fry itself perfectly. Now, when a request comes in for duck and mushroom pizza, messages go to the mushrooms, the ducks, and so on, and they get to work preparing themselves as requested by the pizza base, which, once created and topped, promptly bakes itself in a hot oven as requested. This is roughly how an object-oriented programmer sees a program: a collection of objects come to life. Each different kind of object is programmed with instructions for all the operations it can perform on itself (its behaviours). If such an operation is required, a request goes to the object itself to do it.

Compare these genetically modified beings to a program, which could be one to control a factory making food, say. In the procedural programming version we write a program (or recipe) for duck and mushroom pizza that sets out the sequence of instructions to follow. The computer, acting as a chef, works down the instructions in turn. The programmer splits the instructions into separate sets to do different tasks: making pizza dough, adding all the toppings, and so on. Specific instructions say when the computer chef should start following new instructions and when to return to previous tasks to continue with old ones.

Now, following the genetically-modified food idea instead, the program is thought of as separate objects (one for the pizza base, one for the duck, one for each mushroom), so the programmer has to think in terms of what objects exist and what their properties and behaviours are. She writes instructions (the program) to give each group of objects their specific behaviours (so a duck has instructions for how to roast itself, how to tear itself into pieces, and how to add its pieces on to the pizza base; a mushroom has instructions for how to wash itself, slice itself, and so on). Part of the behaviours the programmer writes are instructions to send messages to other objects to get things done: the pizza base object tells the mushroom objects and duck object to get their act together, prepare themselves and jump on top, for example.

This is a completely different way to think of a program based on a completely different way of decomposing it. Instead of breaking the task into subtasks of things to do, you break it into objects, separate entities that send messages to each other to get things done. Which is best depends on what the program does, but for many kinds of tasks the object-oriented approach is a much more natural way to think about the problem and so write the program.
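
As a rough sketch of the difference, here is what the self-cooking pizza might look like in an object-oriented style, written in Python with class and method names invented for illustration:

  class Duck:
      def prepare(self):
          # The duck knows how to cook and portion itself
          print("The duck roasts itself and tears itself into pieces")
          return "roast duck pieces"

  class Mushroom:
      def prepare(self):
          # The mushroom knows how to get itself ready
          print("A mushroom washes and slices itself")
          return "mushroom slices"

  class PizzaBase:
      def __init__(self):
          self.toppings = []

      def add_topping(self, ingredient):
          # The base sends a message to an ingredient object,
          # asking it to prepare itself and jump on top
          self.toppings.append(ingredient.prepare())

      def bake(self):
          print("The base bakes itself, topped with: " + ", ".join(self.toppings))

  # A duck and mushroom pizza making itself
  base = PizzaBase()
  base.add_topping(Duck())
  base.add_topping(Mushroom())
  base.bake()

Notice there is no single recipe being followed from top to bottom: each kind of object only knows its own behaviours, and the work gets done by objects sending requests to one another.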

So ducks that cook themselves may never happen in the real universe (I hope), but they could exist in the programs of future kitchens run by computers if the programmers use object-oriented programming.



EPSRC supports this blog through research grant EP/W033615/1.