Coordinate conundrum puzzles and vector graphics

A computer with arms doing a coordinate conundrum on a pad of paper

Image by Samson Vlad from Pixabay modified by CS4FN

One way computers store images is as a set of points (as coordinates) that make up lines and shapes. This is the basis for the kind of computer graphics called Vector Graphics. You can explore the idea by doing coordinate conundrum puzzles. A coordinate conundrum is just a vector graphics image to be drawn on paper: a join-the-dots puzzle based on the idea. Can you recreate the original drawing?

Recreate a drawing from coordinates

Points on a grid can be represented by pairs of numbers called coordinates. The first, the x coordinate, says how far to go along, horizontally. The second, the y coordinate, says how far to go up, vertically. The numbers along the axes of the grid (along the bottom and up the side) give the distance in that direction. Draw the point at the place you end up! So, for example, the coordinate (4,5) means go right 4 from the origin, go up 5, and plot the point there.

Grid showing point (4,5) as 4 right and 5 up.
Image by CS4FN

You can join any two coordinates with a line. If a series of lines joins back to the original point then it makes a shape (a polygon), which can be coloured in. For example, if we plot the points (4,5), (7,7) and (8,4), drawing lines between them and back to (4,5), we make a triangle.

A triangle marked out in coordinates
Image by CS4FN

From 4 points we could define a square, rectangle or more general quadrilateral shape and so on.

So, from a set of instructions telling you where to plot points, you can create a picture out of shapes and lines, giving colours for each line or shape.

This (and so these puzzles) is the basis of how programs in the language SVG (Scalable Vector Graphics) store a drawing as the instructions needed to recreate it. Drawing programs often store the illustrations that artists draw with them as an SVG program.
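Here is a minimal sketch of the idea in Python (the picture data structure is our own illustrative choice, not part of SVG itself): a picture stored not as pixels but as a list of coloured shapes, each just a colour plus the coordinates of its corners.

# A picture stored as instructions: a list of shapes,
# each a colour plus the coordinates of its corners.
picture = [
    ("red", [(4, 5), (7, 7), (8, 4)]),  # the triangle from above
]

# "Running" the picture means reading out the drawing instructions.
for colour, points in picture:
    for x, y in points:
        print(f"plot a point {x} along and {y} up")
    print(f"join the dots back to {points[0]} and colour the shape {colour}")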

How to solve them

Each picture is made of shapes, each given by a colour and a list of the coordinates of its vertices (corners). For each shape: 

1. Plot the list of (x,y) coordinates on the grid as dots. 

2. Join the dots (which start and end at the same place) to make the shape. 

3. Colour the shape you have made with the colour at the start of the list. 

So, for example, if the first instruction starts: red (5,24) … that means plot the point 5 along and 24 up. It starts a new shape, coloured red, that ends at the same point.

If the series of points does not join back to the start then it represents coloured lines rather than a coloured shape.

Example: Semaphore flag

Blank 10x10 grid
Image by CS4FN

Here is a simple example to draw a red and yellow semaphore flag, given the shapes:

  • red (0,10) (10,10) (10,0) and back to (0,10)
  • yellow (0,10) (10,0) (0,0) and back to (0,10)

From this we can draw the picture.

Top right triangle now coloured red with (0,10), (10,10) and (10,0) plotted, joined by lines and the shape coloured.
Image by CS4FN

First we plot the points given for the red shape (a triangle), join the dots and colour it in.

  • red (0,10) (10,10) (10,0) and back to (0,10)
Bottom left triangle now coloured yellow with (0,10), (10,0) and (0,0) plotted, joined by lines and the shape coloured.
Image by CS4FN

Then we plot the points given for the yellow shape (another triangle), join the dots and colour it in.

  • yellow (0,10) (10,0) (0,0) and back to (0,10)

Do some puzzles yourself

Try the coordinate conundrum puzzles on our puzzle page, either by printing off the puzzle sheet or drawing on squared paper. Then read on to find out a little more about the advantages of vector graphics.

From straight lines to curves

In these puzzles we have only used straight lines between points, but we could also include commands to create circles and curved lines between points based on the mathematical equation for the curve. SVG, the vector graphic programming language, has instructions for a variety of more complex kinds of lines and shapes like that.

Advantages and disadvantages of Vector Graphics

Recording images in this way as points, lines and shapes has several advantages over storing images as pixels:

  • we generally need far less space to store the image as we do not need to store a colour for thousands or even millions of pixels;
  • the images are mathematically accurate – a line is represented as a perfect straight line not a pixelated (staircase-like) line;
  • the images are scalable, meaning you can increase the size of an image, zooming in, just by multiplying all the numbers involved by a scale factor (a number giving the magnification you want). The resulting image is still mathematically perfect, with straight lines still straight lines, for example, just bigger. Suppose we want to make a semaphore flag twice the size of our original. We just multiply all the points by 2, giving red (0,20) (20,20) (20,0) and back to (0,20); yellow (0,20) (20,0) (0,0) and back to (0,20). That gives an identical picture, just bigger. (Try plotting it and see, or see the sketch below!)
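As a sketch of how simple that scaling is (reusing the picture-as-a-list-of-shapes idea from earlier, which is an illustrative assumption, not a fixed format), it is just one multiplication per number:

def scale_shape(colour, points, factor):
    # Scaling a vector image: multiply every coordinate by the scale factor.
    return (colour, [(x * factor, y * factor) for x, y in points])

flag = [("red",    [(0, 10), (10, 10), (10, 0)]),
        ("yellow", [(0, 10), (10, 0), (0, 0)])]

# Double the size of the semaphore flag: the same picture, just bigger.
big_flag = [scale_shape(colour, points, 2) for colour, points in flag]
print(big_flag)  # red: (0,20) (20,20) (20,0); yellow: (0,20) (20,0) (0,0)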

Disadvantages are:

  • Vector graphics are not a good way to represent photographs – digital cameras record the light at lots of points corresponding to pixels, so photos are naturally stored as pixels (numbers that give the colour of each small square of the image). Trying to convert a photo to a vector image would be hard (needing algorithms to recognise shapes, for example). It would be a less natural and less accurate way of storing it.
  • With computer memory cheap, the advantages of using less storage are less important than they once were.

SVG: a graphics programming language

The programming language SVG records drawings in this way, as instructions to draw shapes and lines based on points. It has some differences to our instructions though. For example, its y-axis coordinates start at the top and work down. The principles are the same though; it is only the details that differ.
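As a hedged sketch of what that looks like, here is some Python that turns our flag shapes into real SVG, flipping the y coordinates as it goes (the 10-unit grid height is our assumption):

def to_svg_polygon(colour, points, height=10):
    # SVG measures y from the top down, so flip: y_svg = height - y.
    svg_points = " ".join(f"{x},{height - y}" for x, y in points)
    return f'<polygon points="{svg_points}" fill="{colour}" />'

flag = [("red",    [(0, 10), (10, 10), (10, 0)]),
        ("yellow", [(0, 10), (10, 0), (0, 0)])]

svg = ['<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 10 10">']
svg += [to_svg_polygon(colour, points) for colour, points in flag]
svg.append('</svg>')
print("\n".join(svg))  # save the output as flag.svg and open it in a browser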

Paul Curzon, Queen Mary University of London

This blog is funded by EPSRC on research agreement EP/W033615/1.


Mary Ann Horton and the invention of email attachments

Mary Ann Horton was transitioning to female at the time she made one of her biggest contributions to our lives: a simple computer science idea with a big impact, a program that allowed binary email attachments.

Now we take the idea of sending each other multimedia files – images, video, sound clips, programs, etc – for granted, whether by email or social networks. Back in the 1970s, before even the web had been invented, people were able to communicate by email, but it was all text. Email programs worked on the basis that people were just sending words, or more specifically streams of characters, to each other. An email message was just a long sequence of characters sent over the Internet. Everything in computers is coded as binary: 1s and 0s, but text has a special representation. Each character has its own code of 1s and 0s that can also be thought of as a binary number, but that is displayed as the character by the programs that process it. Today, computers use a universally accepted code called Unicode, but originally most adopted a standard code called ASCII. All these codes are just allocations of patterns of 1s and 0s to each character. In ASCII, ‘a’ is represented by 1100001, or the number 97, whereas ‘A’ is 1000001, or the number 65, for example. These codes are only 7 bits long, and as computers store data in bytes of 8 bits at a time, this means that not all patterns of binary (so not all representable numbers) correspond to one of the displayable characters that email programs expected messages to contain.

That is fine if all you have done is use programs like text editors that output characters, so you are guaranteed to be sending printable characters. The problem was that other kinds of data, whether images or runnable programs, are not stored as sequences of characters. They are more general binary files, meaning the data is a long sequence of byte-sized patterns of 1s and 0s, and what those 1s and 0s mean depends on the kind of data and the representation used. If email programs were given such data to send, pass on or receive, they would reject it, or more likely mangle it, as not corresponding to characters they could display. The files didn’t even have to be in non-character formats, as at the time some computer systems used a completely different code for characters. This meant text emails could also be mangled just because they passed through a computer using a different character code.

Mary Ann realised that this was all too restrictive for what people would be needing computers to do. Email needed to be more flexible. However, she saw that there was a really easy solution. She wrote a program, called uuencode, that could take any binary file and convert it to one that was slightly longer but contained only characters. A second program she wrote, called uudecode, converted these files of characters back to the original binary file, to be saved by the receiving email program exactly as it was on the sending computer.

All the uuencode program did was take 3 bytes (24 bits) of the binary file at a time and split them into four groups of 6 bits, each effectively representing a number from 0 to 63. It added 32 to each of these numbers so that they were in the range 32 to 95: the numbers, so binary patterns, of printable characters that the email programs expected. Each three bytes of binary had become 4 printable characters. These could be added to the text of an email, though with a special start and end sequence included to identify them as something to decode. uudecode just did this conversion backwards, turning each group of 4 characters back into the original three bytes of binary.
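Here is a minimal sketch of that core step in Python. (Real uuencode also adds a header line, fixed line lengths and a special case for the space character, all omitted here.)

def encode3(three_bytes):
    # Pack 3 bytes (24 bits) into one number, then split into four 6-bit groups.
    n = (three_bytes[0] << 16) | (three_bytes[1] << 8) | three_bytes[2]
    groups = [(n >> shift) & 0b111111 for shift in (18, 12, 6, 0)]
    # Add 32 so every group becomes a printable character (codes 32 to 95).
    return "".join(chr(g + 32) for g in groups)

def decode4(four_chars):
    # Reverse it: subtract 32, reassemble the 24 bits, split back into 3 bytes.
    n = 0
    for c in four_chars:
        n = (n << 6) | (ord(c) - 32)
    return bytes([(n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF])

print(encode3(b"Cat"))           # three bytes become four printable characters
print(decode4(encode3(b"Cat")))  # b'Cat': the original bytes, back again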

Email attachments had been born, and ever since communication programs, whether email, chat or social media, have allowed binary files, so multimedia, to be shared in similar ways. By seeing a coming problem, inventing a simple way to solve it and then writing the programs, Mary Ann Horton had made computers far more flexible and useful.

Paul Curzon, Queen Mary University of London

Related Magazines …

cs4fn issue 14 cover


From Egyptian Survey puzzles to computational thinking

One way to use logical thinking is to deduce new facts and then turn them into IF-THEN rules. These tell us an action to do IF something is true. For example: IF both cards are the same THEN shout SNAP! Once we have IF-THEN rules we can use them as the basis of a program. We can see how this works, and how it involves various computational thinking skills, with a simple logical thinking puzzle.

An Egyptian Survey puzzle 

Image by CS4FN

Old records show that this area of the desert contains the tombs of 3 scribes (a large tomb covering 3 squares), 1 artisan (medium, 2 squares) and 1 merchant (small, 1 square).

A survey has gathered information about where the tombs could be. Each number tells you how many squares in that row or column are part of a tomb.

Tombs are not adjacent (horizontally, vertically or diagonally).

Can you work out where all the tombs are without further digging?

Solving Egyptian Survey puzzles

The instructions of the puzzle give us some simple facts, such as that the number at the end of a row tells us the number of squares in that row holding tombs. On its own that does not tell us how to solve the puzzle. However, thinking logically we can draw simple logical conclusions and so deduce new facts. For example, it is fairly simple to deduce the fact:

the number for a row being 0 IMPLIES there are no tombs in that row.

If we know there are no tombs in a row then we can mark each square of the row as empty. If we use X to mean nothing is in that square, then we can turn that deduced fact into an action. It means that we can do the following when trying to solve the puzzle:

IF the number on a row is 0 
THEN put an X in all the squares of that row.

With that rule we can start to solve the puzzle just by following the rule. Find a row numbered 0 and put Xs there. We can create a similar rule for columns. Applying both these rules to our puzzle we get:

Image by CS4FN

Can you work out more rules before reading on?

Rules, rules, rules

What happens if the number of a row is more than 0? Knowing the number alone doesn’t help us much. We need to combine it with other information. The top row of the puzzle has the number 4, for example, but one of the squares already has a cross in it. That means the remaining 4 squares all must have tombs, which we will mark T. We can turn that into a rule:

IF the number for a row is 4 AND there are 4 empty squares in that row and an X
THEN put a T in all the empty squares of that row.

We can make similar rules for each number 1 to 4. We can then create similar rules for columns. Applying them once each to our puzzle gives us:

Image by CS4FN

We could also make a more general version of this rule:

IF the number for a row is <n> AND there are only <n> empty squares in that row 
THEN put a T in all the empty squares of that row.

This is what computer scientists call generalisation: a part of computational thinking. Instead of having lots of rules (or lines of code) for lots of similar situations, we create one simple but general one that covers all the cases. We can of course apply rules more than once, so as you probably noticed we can apply our row rule once more. In effect our rules live inside a loop and we keep trying them in sequence for as long as we find a rule that makes a difference.

Image by CS4FN

Now one of the rows has the number 1, but we have already marked a tomb in a square of that row. That gives us another rule.

IF the number for a row is 1 AND there is already 1 tomb marked in that row
THEN put an X in all the empty squares of that row.

This gives us:

Image by CS4FN

Similar rules apply for other numbers so we could also make a general version of this rule too.

IF the number for a row is <n> AND there are already <n> tombs marked in that row
THEN put an X in all the empty squares of that row.

Now, applying a column version of the last general rule, we can mark an X in the last empty square of the column with 2 tomb squares.

Image by CS4FN

We need one last rule for this puzzle:

IF the number for a row is <n> AND the number of empty squares + the number of Ts already marked is <n>
THEN put a T in all the empty squares of that row.

This is actually an even more general version of our second rule (it is the case where the number of Ts is 0), so we could replace that rule with this new one.

Applying it finally solves the puzzle:

Image by CS4FN

If we put our rules together then they become the basis of an algorithm (so a program) for solving the puzzles, and in creating them from deduced facts we are doing algorithmic thinking. IF-THEN instructions, along with sequencing and loops, are the basis of all programs. Here we create a sequence of the rules to try in order and put them inside a loop, so that we keep trying them until none apply or we have solved the puzzle. There is a style of programming where a program is nothing but a series of IF-THEN rules, with the looping happening implicitly.
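Here is a sketch of that rule-loop structure in Python. The grid representation (a list of rows, with " " for an empty square) and the two sample rules are our own illustrative choices; each rule returns True if it changed something.

def rule_zero_row(grid, counts):
    # IF the number on a row is 0 THEN put an X in all its empty squares.
    changed = False
    for row, count in zip(grid, counts):
        if count == 0:
            for i, square in enumerate(row):
                if square == " ":
                    row[i] = "X"
                    changed = True
    return changed

def rule_fill_row(grid, counts):
    # IF the number for a row is n AND Ts marked + empty squares = n
    # THEN put a T in all the empty squares of that row.
    changed = False
    for row, count in zip(grid, counts):
        if row.count("T") + row.count(" ") == count:
            for i, square in enumerate(row):
                if square == " ":
                    row[i] = "T"
                    changed = True
    return changed

def solve(grid, row_counts):
    rules = [rule_zero_row, rule_fill_row]  # column versions etc. omitted
    while any(rule(grid, row_counts) for rule in rules):
        pass  # keep trying the rules until none makes a difference
    return grid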

Algorithms are just rules to follow that guarantee a result (here, solving the puzzle). It is only an algorithm if it is guaranteed to always work. We would need yet more rules for it to be an algorithm that solves any (solvable) Egyptian Survey puzzle like this. Making sure algorithms cover all possibilities is one of the harder parts of algorithmic thinking – bugs in programs are often there because the programmer didn’t manage to cover every possibility. In our case we could still write a program based on our limited rules. We would just need to include an extra rule that quits (admitting defeat and saying that the program cannot solve the puzzle) if no other rule applies.

Perhaps you can work out all the rules (so an algorithm) needed to solve any of these puzzles! All you need are the instructions for the game, some logical thinking to deduce new facts, some algorithmic thinking to turn them into rules, and an ability to generalise so you have a small number of rules … computational thinking skills in fact.

– Paul Curzon, Queen Mary University of London


Pac-Man and Games for Girls

In the beginning video games were designed for boys…and then came Pac-Man.

Pac-man eating dots
Image by OpenClipart-Vectors from Pixabay

Before mobile games, games consoles and PC-based games, video games first took off in arcades. Arcade games were very big business, earning 39 billion dollars at their peak in the 1980s. Games were loaded into bespoke coin-operated arcade machines. For a game to do well, someone had to buy the machines, whether for actual gaming arcades or for bars, cafes, colleges, shopping malls, … Then someone had to play them. Originally boys played arcade games the most and so games were targeted at them. Most games had a focus on shooting things, like Asteroids and Space Invaders, or had some link to sports, based on the original arcade game Pong. Girls were largely ignored by the designers… But then came Pac-Man.

Pac-Man, created by a team led by Toru Iwatani, is a maze game where the player controls the Pac-Man character as it moves around a maze, eating dots while being chased by the ghosts: Blinky, Pinky, Inky and Clyde. Special power pellets around the maze, when eaten, allow Pac-Man to chase the ghosts for a while instead of being chased.

Pac-Man ultimately made around 19 billion dollars in today’s money, making it the biggest money-making arcade video game of all time. How did it do it? It was the first game that was played by more females than males. It showed that girls would enjoy playing games, if only the right kind of games were developed. Suddenly, and rather ironically given its name, there was a reason for the manufacturers to take notice of girls, not just boys.

A Pac-man like ghost
Image by OpenClipart-Vectors from Pixabay

It revolutionised games in many ways, showing the potential of different kinds of features to give a game this much broader appeal. Most obviously, Pac-Man turned the tide away from shoot-em-up space games and sports games towards action games where characters were the star, and that was one of its inventor Toru Iwatani’s key aims. To play, you control Pac-Man rather than just a gun, blaster, tennis racket or golf club. It paved the way for Donkey Kong, Super Mario and the rest (so if you love Mario and all his friends, then thank Pac-Man). Ultimately, it forged the path for the whole idea of avatars in games too.

It was also the first game to use power-ups, where, by collecting certain objects, the character gains extra powers for a short time. The ghosts were also characters, controlled by simple AI – they didn’t just behave randomly or follow some fixed algorithm controlling their path, but reacted to what the player did, and each had its own personality in the way it behaved.

Because of its success, maze and character-based adventure games became popular with manufacturers, but more importantly designers became more adventurous and creative about what a video game could be. It was also the first big step along the long road to women being fully accepted to work in the games industry. Not bad for a character based on a combination of a pizza and the Japanese symbol for “mouth”.

– Paul Curzon, Queen Mary University of London

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing


T. V. Raman and his virtual guide dogs

Guide dog silhouette with binary superimposed
Image by PC modifying dog from Clker-Free-Vector-Images from Pixabay

It’s 1989, a year with lots of milestones in Computer Science. In March, Tim Berners-Lee puts down in writing the idea of an “information management system”, later to become the world wide web. In July, Nintendo releases the Game Boy in North America selling 118 million units worldwide over its 14-year production.

Come autumn, a 24-year-old arrives in Ithaca, US, home of Cornell University. He would be able to feel the cool September air as it blew off Cayuga Lake, smell the aromas of Ithaca’s 190 species of trees, and listen to a range of genres in the city’s live music scene. However, he couldn’t take in the natural beauty of the city in its entirety as he started his PhD … because he was blind. That did not stop him going on to have a gigantic impact on the lives of blind and partially sighted people worldwide.

T. V. Raman was born in Pune, India, in 1965. He had been partially sighted from birth, but at the age of 14 he became blind due to a disease called glaucoma. Throughout his life, however, he has not let this stop him.

While he was partially sighted, he was able to read and write – but as his sight worsened, with the help of his brother, mentors and aides, he was still able to continue learning from textbooks, and to solve problems which were read to him. At the height of the Rubik’s cube craze, in the early 1980s, he also learned how to solve a specially customised cube, and could do so in about 30 seconds.

Raman soon developed an interest in mathematics, and around 1983 started studying for a maths degree at the University of Pune. On finishing in 1987, he studied for a Masters degree at the Indian Institute of Technology Bombay, this time in Computer Science and Maths. It was with the help of student volunteers that he was able to learn from textbooks, and assistance with programming was provided by an able volunteer.

Today, people with no vision often use a screen reader to hear what is on a screen. Not everyone is lucky enough to have as much help as Raman had, and screen readers play the part of all those human volunteers who helped him. Raman himself played a big part in their development.

Modern screen readers allow you to navigate the screen part-by-part, with important information and content read to you. Many of these systems are built into operating systems, such as the Narrator in Windows (which uses a huge number of keyboard shortcuts), and Google TalkBack for Android devices (where rubbing the screen, vibration, and audio hints are used). These simpler screen readers might already be installed on your system – if so have a go with them!

While Raman was learning programming, such screen readers were still in their infancy. It was only in the 1980s that a team at IBM developed a screen reader for the command-line interface of the IBM DOS (which Raman would later use), and it would be many years before screen readers were available for the much more challenging graphical user interfaces we’re so accustomed to today.

It was at Cornell University where Raman settled on his career-long research interest: accessibility. He originally intended to do an Applied Mathematics PhD, but then discovered the need for ways to use speech technology to read complicated documents, especially those with embedded mathematics. For his dissertation, he therefore developed the Audio System for Technical Readings (ASTER) to solve the problem.

What he realised was that when looking at information visually, our eyes are active, taking in information from different places, while the display is passive. With an audio interface this is reversed: the ear is passive and the display actively chooses the order in which information is presented. That makes it impossible to get a high-level view first and then dive into particular detail, which is a big problem when ‘reading’ maths by listening to it. His system solved the problem using audio formatting, which allows the listener to browse the structure of the information first.

He named this program after his first guide dog, Aster, which he obtained, alongside a talking computer, in early 1990. Both supported him throughout his PhD. For this work, he received the ACM Doctoral Dissertation Award, a prestigious yearly worldwide award celebrating the best PhD dissertation in computer science and related fields.

Following on from this work, he developed a program called Emacspeak, an audio desktop, which, unlike a screen reader, takes existing programs and makes them work with audio outputs. It makes use of Emacs, a family of text editors (think notepad, but with lots more features), as well as a programming language called Lisp. Raman has continued to develop Emacspeak ever since and the program is often bundled within Linux operating system installations. Like ASTER, versions of this program are dedicated to his guide dogs.

Following his PhD, Raman worked briefly with Adobe Systems and IBM, but, since 2005, has worked with Google on auditory user interfaces, accessibility and usability. In 2014, alongside Google colleagues, he published a paper on a new application called JustSpeak, a system for navigating the Android operating system with voice commands. He has also gone back to his roots, integrating mathematical speech into ChromeVox, the screen reader built into Chromebook devices.

Despite growing up in a time of limited access to computers for blind and visually impaired people, Raman was able, with the help of his brother and student volunteers, to learn how to program, solve a Rubik’s cube, and solve complex maths problems. With early screen readers he was also able to build tools for fellow blind and visually impaired people, and then benefit himself from his own tools to achieve even more.

Guide dogs can transform the lives of blind and partially sighted people by allowing them to do things in the physical world that they otherwise could not do. T. V. Raman’s tools provide a similar transformation in the digital world, changing lives for the better.

– Daniel Gill, Queen Mary University of London

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing


Designing for autistic people

by Daniel Gill and Paul Curzon, Queen Mary University of London

What should you be thinking about when designing for a specific group with specific needs, such as autistic people? Queen Mary students were set this task and on the whole did well. The lessons though are useful when designing any technology, whether apps or gadgets.

A futuristic but complicated interface
A futuristic but complicated interface with lots of features: feature bloat?
Image by Tung Lam from Pixabay

The Interactive Systems Design module at QMUL includes a term-long realistic team interaction design project with the teaching team acting as clients. The topic changes each year but is always open-ended and aimed at helping some specific group of people. The idea is to give experience designing for a clear user group not just for anyone. A key requirement is always that the design, above all, must be very easy to use, without help. It should be intuitively obvious how to use it. At the end of the module, each team pitches their design in a short presentation as well as a client report.

This year the aim was to create something to support autistic people. What their design does, and how, was left to the teams to decide from their early research and prototyping. They had to identify a need themselves. As a consequence, the teams came up with a wide range of applications and tools to support autistic people in very different ways.

How do you come up with an idea for a design? It should be based on research. The teams had to follow a specific (if simplified) process. The first step was to find out as much as they could about the user group and other stakeholders being designed for: here autistic people and, if appropriate, their carers. The key thing is to identify their unmet goals and needs. There are lots of ways to do this: from book research (charities, for example, often provide good background information) and informally talking to people from the stakeholder group, to more rigorous methods of formal interviews, focus groups and even ethnography (where you embed yourself in a community).

Many of the QMUL teams came up with designs that clearly supported autistic people, but some projects were only quite loosely linked with autism. While the needs of autistic people were considered in the concept and design, they did not fully focus on supporting autistic people. More feedback directly from autistic people, both at the start and throughout the process, would likely have made the applications much more suitable. (That, of course, is quite hard in this kind of student role-playing scenario, though some groups were able to do so.) That though is a key idea the module aims to teach: how important it is to involve users and their concerns closely throughout the design process, both in coming up with designs and in evaluating them. Old-fashioned waterfall models from software engineering, where designs are only tested with users at the end, are just not good enough.

From the research, the teams were then required to create design personas. These are detailed, realistic but fictional people with names, families and lives. The more realistic the character the better (computer scientists need to be good at fiction too!). Personas are intended to represent the people being designed for in a concrete and tangible way throughout the design process. They help to ensure the designers design for real people, not some abstract person that shape-shifts to fit the needs of their ideas. Doing the latter can lead to concepts being pushed forward just because the designer is excited by their ideas, rather than because they are actually useful. Throughout the design the team refer back to the personas – does this idea work for Mo and the things he is trying to do?

An important part of good persona design concerns stereotypes. The QMUL groups avoided stereotypes of autistic people. One group went further, though: they included the positive traits that their autistic persona had, not just negative ones. They didn’t see their users in a simplistic way. Thinking about positive attributes is really, really important when designing for neurodivergent people, but also for those with physical disabilities, to help make the persona a realistic person. That group’s persona was therefore outstanding. Alan Cooper, who came up with the idea of design personas, argued that stereotypes (such as a nurse persona being female) were good in that they could give people a quick and solid idea of the person. However, this is a very debatable view. It seems to go against the whole idea of personas. Most likely you miss the richness of real people and end up designing for a fictional person that doesn’t represent that group of people at all. The aim of personas is to help the designers see the world from the perspective of their users, so here of autistic people. A stereotype can only diminish that.

Another core lesson of the module is the importance of avoiding feature bloat. Lots of software and gadgets are far harder to use than they need be because they are packed with features: features that are hardly ever, possibly never, used. What could have been a simple-to-use app, focusing on some key tasks, is instead turned into a ‘do everything’ app. A really good video call app instead becomes a file store, a messaging place, chat rooms, a phone booth, a calendar, a movie player, and more. Suddenly it’s much harder to make video calls. Because there are so many features and so many modes, all needing their own controls, the important things the design was supposed to help you do become hard to do (think of a TV remote control – the more features, the more buttons, until the important ones are lost). That undermines the aim that good design should make key tasks intuitively easy. The difficulty when designing such systems is balancing the desire to put as many helpful features as possible into a single application against the complexity that this adds. That can be bad for neurotypical people, who may find the result hard to use. For neurodivergent people it can be much worse – they can find themselves overwhelmed. When presented with such a system, if they can use it at all, they might have to develop their own strategies to overcome the information overload caused. For example, they might need to learn the interface bit by bit. For something being designed specifically for neurodivergent people, that should never happen. Some of the applications of the QMUL teams were too complicated like this. This seems to be one of the hardest things for designers to learn, as adding ideas and features seems like a good thing, but it is vitally important not to make this mistake if designing for autistic people.

Perhaps one of the most important points that arose from the designs was that many of the applications presented were designed to help autistic people change to fit into the world. While this would certainly be beneficial, it is important to realise that such systems are only necessary because the world is generally not welcoming for autistic people. It is much better if technology is designed to change the world instead. 

Magazines …


Front cover of CS4FN issue 29 - Diversity in Computing


Testing AIs in Minecraft

by Paul Curzon, Queen Mary University of London

What makes a good environment for child AI learning development? Possibly the same as for human child learning development: Minecraft.

Lego is one of the best games there is for children’s learning development. The word Lego is based on the Danish words for ‘play’ and ‘well’. In the virtual world, Minecraft has of course taken up the mantle. A large part of why they are wonderful games is that they are open-ended and flexible. There are infinite possibilities over what you can build and do. Rather than focussing on something limited to learn, as many other games do, they support open-ended creativity and so educational development. Given how positive it can be for children, it shouldn’t be surprising that Minecraft is now being used to help AIs develop too.

Games have long been used to train and test artificial intelligence programs. Early programs were developed to play, and ultimately beat humans at, specific games like Checkers, Chess and, later, Go. That mastered, AIs started to learn to play individual arcade games as a way to extend their abilities. A key part of our intelligence is flexibility, though: we can learn new games. Aiming to copy this, AIs were trained to follow suit and so became more flexible, showing they could learn to play multiple arcade games well.

This is still missing a vital part of our flexibility though. The thing about all these games is that the whole game experience is designed to be part of the game, and so part of the task the player has to complete. Everything is there for a reason. It is all an integral part of the game. There are no pieces in a chess game that are just there to look nice and will never, ever play a part in winning or losing. Likewise, all the rules matter. When problem solving in real life, though, most of the world, whether objects, the way things behave or whatever, is not there explicitly to help you solve the problem. It is not even there just to be a designed distractor. The real world also doesn’t have just a few distractors, it has lots and lots. Looking round my living room, for example, there are thousands of objects, but only one will help me turn on the TV.

AIs that are trained on games may, therefore, just become good at working in such unreal environments. They may need to be told what matters and what to ignore to solve problems. Real problems are much messier, so put these AIs in the real world, or even a more realistic virtual world, to problem solve and they may turn out to be not very clever at all. Tests of their skills based on such tasks may not really test them at all.

Researchers at the University of the Witwatersrand in South Africa decided to tackle this issue using yet another game: Minecraft. Because Minecraft is an open-ended virtual world, tackling challenges created in it means working in a world that is about much more than just the problem itself. The Witwatersrand team’s resulting MinePlanner system is a collection of 45 challenges, some easy, some harder. They include gathering tasks (like finding and gathering wood) and building tasks (like building a log cabin), as well as tasks that combine the two. Each comes in three versions. In the easy version nothing is irrelevant. The medium version contains a variety of extraneous things that are not at all useful to the task. The hard version is set in a full Minecraft world where there are thousands of objects that might be used.

To tackle these challenges an AI (or human) needs to solve not just the complex problem set, but also work out for themselves what in the Minecraft world is relevant to the task they are trying to perform and what isn’t. What matters and what doesn’t?

The team hope that by setting such tests they will help encourage researchers to develop more flexible intelligences, taking us closer to having real artificial intelligence. The problems are proposed as a benchmark for others to test their AIs against. The Witwatersrand team have already put existing state-of-the-art AI planning systems to the test. They weren’t actually that great at solving the problems and even the best could not complete the harder tasks.

So it is back to school for the AIs but hopefully now they will get a much better, flexible and fun education playing games like Minecraft. Let’s just hope the robots get to play with Lego too, so they don’t get left behind educationally.

Magazines …


Front cover of CS4FN issue 29 - Diversity in Computing


Neurodiversity and what it takes to be a good programmer

by Paul Curzon, Queen Mary University of London

A multicoloured jigsaw pattern ribbon
Image by Oberholster Venita from Pixabay

People often suggest neurodiverse people make good computer scientists. For example, one of the most famous autistic people, Temple Grandin, an academic at Colorado State University and animal welfare expert, has suggested programming is one of the jobs autistic people are potentially naturally good at (along with other computer science linked jobs), and that “Half the people in Silicon Valley probably have autism.” So what makes a good computer scientist? And why might people suggest neurodiverse people are good at it?

What makes a good programmer? Is it knowledge, skills or is it the type of person you are? It is actually all three though it’s important to realise that all three can be improved. No one is born a computer scientist. You may not have the knowledge, the skills or be the right kind of person now, but you can improve them all.

To be a good programmer, you need to develop specialist knowledge such as knowing what the available language constructs are, knowing what those constructs do, knowing when to use them over others, and so on. You also need to develop particular technical skills like an ability to decompose problems into sub-problems, to formulate solutions in a particular restricted notation (the programming language), to generalise solutions, and so on. However, you also need to become the right kind of person to do well as a programmer. 

Thinking about what kind of person makes a good programmer, to help my students work on these attributes and so become better programmers, I made a list of the ones I associate with good student programmers. My list includes: attention to detail, an ability to think clearly and logically, being creative, having good spatial visualising skills, being a hard worker, being resilient and so determined when things go wrong, being organised, being able to meet deadlines, enjoying problem solving, being good at pattern matching, thinking analytically and being open to learning new and different ways of doing things.

More recently, when taking part in a workshop about neurodiversity I was struck by a similar list we were given. Part of the idea behind ‘neurodiversity’ is that everyone is different and everyone has strengths and weaknesses. If you think of ‘disability’ you tend to think of apparent weaknesses. Those ‘weaknesses’ are often there because the world we have created has turned them into weaknesses. For example, being in a wheelchair makes it hard to travel because we have built a world full of steps, kerbs and cobbles, doors that are hard to manipulate, high counters and so on. If we were to remove all those obstacles, a wheelchair would not have to reduce your ability to get around. Thinking about neurodiversity, the suggestion is to think about the strengths that come with it too, not just the difficulties you might encounter because of the way we’ve made the world.

The list of strengths of neurodiverse people given at the workshop was: attention to detail, focussed interest, problem solving, creativity, visualising and pattern recognition. Looking further you find both those positives reinforced and new positives added. For example, one support website gives the positives of being an autistic person as: attention to detail, deep focus, observation skills, ability to absorb and retain facts, visual skills, expertise, a methodological approach, taking novel approaches, creativity, tenacity and resilience, accepting of difference, and integrity. Thinking logically is also often picked out as a trait that neurodiverse people are good at. The similarity of these lists to my list of the kind of person my students should aim to turn themselves into is very clear. Autistic people can start with a very solid basis to build on. If my list is right, then their personal positives may help neurodiverse people to quickly become good programmers.

Here are a few of those positives others have picked out that neurodiverse people may have and how they relate to programming:

Attention to detail: This is important in programming because a program is all about detail, both in the syntax (not missing brackets or semicolons) and, more importantly, in not missing situations that could occur, so cases the program must cover. A program must deal with every possibility that might arise, not just some. The way it deals with them also matters in the detail. A poor program might just announce a mistake was made and shut down. A good program will explain the mistake and give the user a way to correct it, for example. Detail like that matters. Attention to detail is also important in debugging, as bugs are just details gone wrong.

Resilience and determination: Programming is like being on an emotional roller coaster. Getting a program right is full of highs and lows. You think it is working and then the last test you run shows a deep flaw. Back to the drawing board. As a novice learning it is even worse. The learning curve is steep as programming is a complex skill. That means there are lots of lows and seemingly insurmountable highs. At the start it can seem impossible to get a program to even compile never mind run. You have to keep going. You have to be determined. You have to be resilient to take all the knocks.

Focussed interest: Writing a program takes time and you have to focus. Stop and come back later and it will be so much harder to continue and to avoid making mistakes. Decomposition is a way to break the overall task into smaller subtasks, so methods to code, and that helps, once you have the skill. However, even then, being able to maintain your focus to finish each method, so each subtask, makes the programming job much easier.

Pattern recognition: Human expertise in anything ultimately comes down to pattern matching new situations against old. It is the way our brains work. Expert chess players pattern match situations to tell them what to do, and so do firefighters in a burning building. So do expert programmers. Initially programming is about learning the meaning of programming constructs and how to use them, problem solving every step of the way. That is why the learning curve is so steep. As you gain experience, though, it becomes more about pattern matching: realising what a particular program needs at this point, seeing how it is similar to something you have seen before, and matching it to one of many standard template solutions. Then you just whip out the template and adapt it to fit. Spot that something is essentially a search task and you whip out a search algorithm to solve it. Need to process a 2-dimensional array? You just need the rectangular for-loop solution (see the sketch below). Once you can do that kind of pattern matching, programming becomes much, much simpler.
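For example, here is that rectangular for-loop template in Python (the grid and the task of summing it are illustrative assumptions): a pattern an experienced programmer just whips out and adapts.

# The standard template for processing a 2-dimensional array:
# a loop over the rows, with a loop over the columns nested inside.
grid = [[1, 2, 3],
        [4, 5, 6]]

total = 0
for row in grid:       # visit every row...
    for value in row:  # ...and every value within that row
        total += value
print(total)  # 21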

Creativity and doing things in novel ways: Writing a program is an act of creation, so, just like arts and crafts, it involves creativity. Writing a program is one kind of creativity; coming up with an idea for a program, or spotting a need no one else has noticed so you can write a program that fills it, requires great creativity of a slightly different kind. So does coming up with a novel solution once you have a novel problem. Developing new algorithms is all about thinking up a novel way of solving a problem, and that of course takes creativity. Designing interfaces that are aesthetically pleasing but make a task easier to do takes creativity. If you can think about a problem in a different way to everyone else, then you will likely come up with different solutions no one else thought of.

Problem solving and analytical minds: Programming is problem solving on steroids. Being able to think analytically is an important part of problem solving and is especially powerful if combined with creativity (see above). You need to be able to analyse a problem, come up with creative solutions and be able to analyse which of those creative solutions is the best way of solving it. Being analytical helps with solid testing too.

Visual thinking: research suggests those with good visual, spatial thinking skills make good programmers. The reasons are not clear, but good programs are all about clear structure, so it may be that the ability to easily see the structure of programs and manipulate them in your head is part of it. That is part of the idea of block-based programming languages like Scratch, and why they are used as a way into programming for young children: the structure of the program is made visual. Some paradigms of programming are also naturally more visual. In particular, object-oriented programming sees programs as objects that send messages to each other, and that is something that can naturally be visualised. As programs become bigger, the ability to still visualise the separate parts and how they work as a whole is a big advantage.

A methodological approach: Novice programmers just tinker and hack programs together. Expert programmers design them. Many people never seem to get beyond the hacking stage, struggling with the idea of following a method to design first, yet it is vital if you are to engineer serious programs that work. That doesn’t mean that programming is just following methods: tinkering can be part of the problem solving and of coming up with creative ideas, but it should be used within a rigorous methodology, not instead of one. Good programming teams also spend as much time testing programs as writing them, and that takes rigorous methods to do well too. Software engineering is all about following rigorous methods, precisely because that is the only way to develop real programs that work and are fit for purpose. Vast amounts of the software written is never used because it is useless. Rigorous methods give a way to avoid many of the problems.

Logical thinking: Being able to think clearly and logically is core to programming. It combines some of the things above, like attention to detail and thinking clearly and methodically. Writing programs is essentially the practice of applied logic, as logic underpins the semantics (i.e. meaning) of programming languages and so of programs. You have to think logically when writing programs, when testing them and when debugging them. Computers are machines built from logic and you need to think in part like a computer to get it to do what you want. A key part of good programming is doing code walkthroughs – as a team, stepping through what a program does line by line. That requires clear logical thinking, to think through the steps the computer performs.

I could go on, but will leave it there. The positives that neurodiverse people might have are very much positives for becoming a good programmer. That is why some of the best students I’ve had the privilege to teach have been neurodiverse students.

Different people, neurodiverse or otherwise, will start with different positives and different weaknesses. People start in different places for different reasons, some ahead, some behind. I liked doing puzzles as a child, so spent my childhood devouring logic and algorithmic puzzles. That meant when I first tried to learn to program, I found it fairly easy. I had built important skills and knowledge, and had become a good logical thinker and problem solver, just for fun. I learnt to program for fun. It was as if I’d started way beyond the starting line in the race to become a programmer. Many neurodiverse people do the same, if for different reasons.

Other skills I’ve needed as a computer scientist I have had to work hard on, developing strategies to overcome my weaknesses. I am a shy introvert, but I need to both network and give presentations as a computer scientist (and now, as an academic computer scientist, I give weekly lectures to hundreds at a time). For that I had to practice, learn theory about good presentation and, perhaps most importantly given how paralysing my shyness was, devise a strategy to overcome that natural weakness. I did find a strategy: I developed an act. I have a fake extrovert persona that I act out in these situations where I can’t otherwise cope, so it is not me you see giving presentations and lectures but my fake persona. Weaknesses can be overcome, even if they mean you start far behind the starting line. Of course, some weaknesses, and some of the ways we’ve built the world, mean we may not be able to overcome the problems, and not everyone wants to be a computer scientist anyway. What matters is finding the future that matches your positives and interests, and where you can overcome the weaknesses if you set your mind to it.

Programming is not about born talent though (nothing is). We all have strengths and weaknesses and we can all become better by practicing and finding strategies that work for us, building upon our strengths and working on our weaknesses, especially when we have the help of a great teacher (or have help to change the way the world works so the weaknesses vanish).

My list above gives some of the key personal characteristics you need to work on improving (however good or bad at them you are right now) if you do want to be a good programmer. Anyone can become better at programming than they are if they have that desire. What matters is that you want to learn, are willing to put the practice in, can develop strategies to overcome your initial weaknesses, and don’t give up. Neurodiverse people often have a head start on those personal attributes for becoming good at new things too.

Magazines …


Front cover of CS4FN issue 29 - Diversity in Computing


The top 10 bugs

by Paul Curzon, Queen Mary University of London

(updated from the archive)

Bugs are everywhere, so why not learn from the mistakes of others? Here are some common bugs, with examples of how they led to it all going terribly wrong.

The bugs of others show how solid testing, code walkthroughs, formal reasoning and other methods for preventing bugs really matter. In the examples below the consequences were massive, but none of these bugs should have made it into final systems, whatever the consequences.

Here then is my personal countdown of the top 10 bugs to learn from.

BUG 10: Divide by Zero

USS Yorktown (from wikipedia)
USS Yorktown, Public Domain via wikimedia

The USS Yorktown was used as a testbed for a new generation of “smart” ship. On 21 September 1997, its propulsion system failed, leaving it “dead in the water” for 3 hours: a crew member input a 0 where no 0 should have been, the software tried to divide by zero, and every computer on the ship’s network crashed.

Moral: Input validation matters a lot and there are some checks you should know should be done as standard.
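A sketch in Python of the kind of standard input check that prevents this (the calculation itself is made up for illustration):

def rate_per_valve(total_rate, valves_open):
    # Validate the input before using it: a 0 typed in here would
    # otherwise crash the calculation with a divide-by-zero error.
    if valves_open <= 0:
        raise ValueError("number of open valves must be a positive whole number")
    return total_rate / valves_open

print(rate_per_valve(120.0, 4))  # fine: 30.0
try:
    print(rate_per_valve(120.0, 0))
except ValueError as error:
    print("rejected:", error)  # the bad input is caught, not fed to a division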

BUG 9: Arithmetic Overflow

Boeing 787 Dreamliner
From Wikipedia Author pjs2005 from Hampshire, UK CC BY-SA 2.0

Keep adding to an integer variable and you can run out of bits. Suddenly you have a small number, not the bigger one expected. This was one bug in the Therac-25 radiation therapy machine that killed patients. The Boeing 787 Dreamliner had the same problem: fly for more than 248 days and it would switch off.

Moral: Always have checks for overflow and underflow, and if it might matter then you need something better than a fixed bit-length number.
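Python’s own integers grow as needed, but here is a sketch of what happens in a fixed 32-bit signed counter, like one counting hundredths of a second (the 787 details are simplified down to the arithmetic):

def add_int32(counter, amount):
    # Simulate a 32-bit signed integer: keep only the low 32 bits...
    result = (counter + amount) & 0xFFFFFFFF
    # ...and reinterpret the top bit as a sign, as the hardware would.
    return result - 0x100000000 if result >= 0x80000000 else result

print(add_int32(2_147_483_647, 1))  # -2147483648: it wraps negative, silently

# Counting hundredths of a second hits that limit after roughly 248 days:
print(2_147_483_647 / (100 * 60 * 60 * 24))  # about 248.5 days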

BUG 8: Timing Problems

Three telephone handsets on pavement.
Image by Alexa from Pixabay

AT&T lost $60 million the day the phones died (all of them). It was the result of changing a few lines of working code. Things happened too fast for the program: the telephone switches reset, but were told they needed to reset again before they had finished, … and so on.

Moral: even small code changes need thorough testing…and timing matters but needs extra special care.

BUG 6.99999989: Wrong numbers in a lookup table

Intel Pentium chip
Konstantin Lanzet – CPU Collection Konstantin Lanzet CC BY-SA 3.0

Intel’s Pentium chip turned out not to be able to divide properly. It was due to a wrong entry in a lookup table. Intel set aside $475 million to cover replacing the flawed processors. Some chips were turned into key rings.

Moral: Data needs to be thoroughly checked not just instructions.

BUG 6: Wrong units

Artist's impression of the Mars Climate orbiter from wikipedia
Mars Climate Orbiter. Public Domain via wikimedia.

The Mars Climate Orbiter spent 10 months getting to Mars … where it promptly disintegrated, having passed too close to the planet’s atmosphere. One piece of software produced numbers in pound-force seconds; the programmers of the software using them assumed they were in newton-seconds.

Moral: Clear documentation (including comments) really does matter.
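One defence, sketched in Python, is to keep the unit explicit in the code rather than in programmers’ heads (the function is illustrative; the conversion factor is the standard 1 pound-force = 4.44822 newtons):

LBF_TO_NEWTONS = 4.44822  # 1 pound-force is about 4.44822 newtons

def impulse_in_newton_seconds(value, unit):
    # Carry the unit with the number, so a mismatch is caught, not assumed.
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * LBF_TO_NEWTONS
    raise ValueError(f"unknown impulse unit: {unit}")

print(impulse_in_newton_seconds(10, "lbf*s"))  # 44.4822 newton-seconds
print(impulse_in_newton_seconds(10, "N*s"))    # 10: already the right unit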

BUG 5: Non-terminating loop

Spinning pizza of death. Image Public Domain via wikimedia

The spinning pizza of death is common. Your computer claims to be working hard, and puts up a progress symbol like a spinning wheel… forever. There are lots of ways this happens. The simple version is that the program has entered a loop in a way that means the test to continue is never false. This took on a greater spin in the Fujitsu-UK Post Office Horizon system, where bugs triggered the biggest ever miscarriage of justice in the UK. Hundreds of postmasters were accused of stealing money because Horizon said they had. One of many bugs was that, mid-transaction, the Horizon terminal could freeze. While in this infinite loop, hitting any key duplicated the transaction, subtracting the money again for every key press, not just once for the stamp being bought.

Moral: Clear loop structure matters – loops should only exit via one clear (and reasoned-correct) condition.
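A sketch of the classic mistake in Python (the processing is a made-up stand-in, and the broken loop is shown commented out so the sketch runs):

def process(item):
    print("processing", item)  # stand-in for the real work

items = ["stamp"]

# The classic infinite loop: the body never changes anything the exit
# test depends on, so "i < len(items)" stays true forever.
# i = 0
# while i < len(items):
#     process(items[i])  # oops: i is never incremented

# The fix: a loop whose structure itself guarantees it reaches its exit.
for item in items:
    process(item)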

BUG 4: Storing a big number in a small space

Arianespace’s Ariane 5 rocket. Photo credit: NASA/Chris Gunn, via Wikipedia. CC BY 2.0.

The first Ariane 5 rocket exploded 40 seconds after lift-off, at a cost of $500 million. Despite $7 billion spent on the rocket, the program stored a 64-bit floating point number into a variable that could only hold a 16-bit integer.

Moral: Strong type systems are there to help, not hinder. Use languages without them at your peril.
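
Python itself would cope with the big number, but the struct module lets us sketch the Ariane-style conversion (variable name invented): packing a value into a 16-bit field fails if it does not fit.

    import struct

    horizontal_bias = 65536.7   # held comfortably as a 64-bit float

    try:
        struct.pack("<h", int(horizontal_bias))   # "<h" = 16-bit signed int
    except struct.error as err:
        print("conversion failed:", err)
    # Ariane 5's software had no handler for this, and the rocket was lost.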

BUG 3: Memory Leak

Memory leaks (forgetting to free up space when you are done with it) are responsible for many computer problems. The Firefox browser had one. It was infamous because its developers (implausibly) claimed the program had no memory leaks.

Moral: If your language doesn’t use a garbage collector then you need to be extra careful about memory management.
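
Garbage collectors only free what nothing points to, so a cache that is never pruned leaks even in Python; a minimal sketch:

    cache = {}   # module-level, so it lives as long as the program

    def process(request_id, data):
        cache[request_id] = data   # stored... and never removed
        return len(data)

    for i in range(3):
        process(i, "x" * 1000)
    print(len(cache))   # 3 entries and counting: the garbage collector
                        # cannot reclaim objects something still references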

BUG 2: Null pointer errors

Tony Hoare. Photograph by Rama, Wikimedia Commons, CC BY-SA 2.0 FR.

Tony Hoare, who invented the null pointer (a pointer that points nowhere), called it his “billion-dollar mistake” because programmers struggle to cope with it. Null pointer bugs crash computers, give hackers ways in and generally cause chaos.

Moral: Avoid null pointers and do not rely on just remembering to check for them.
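
Python’s null pointer is None; a minimal sketch of the crash, and of building the check into the code rather than relying on memory:

    def find_user(name, users):
        for user in users:
            if user["name"] == name:
                return user
        return None   # the pointer that points nowhere

    users = [{"name": "Ada"}]
    # find_user("Tony", users)["name"]   # TypeError: None is not subscriptable

    user = find_user("Tony", users)
    if user is None:               # the check you must not forget
        print("no such user")
    else:
        print(user["name"])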

BUG 1: Buffer overflow

The Morris Worm, an early Internet worm, came close to shutting down the Internet. It used a buffer overflow bug in network software to move from computer to computer, shutting them down. Data was stored in a fixed-size buffer, but store too much and the excess would just be placed in the next available locations in memory, so it could overwrite the program with new code.

Moral: Array-like data structures need extra care. Always triple-check that bounds are correct and that the code ensures overflows cannot happen.
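
Real buffer overflows live in languages like C, but a Python sketch (names invented) shows the missing defence: check the buffer’s bounds before every write.

    BUFFER_SIZE = 8
    buffer = [0] * BUFFER_SIZE

    def write_bytes(data):
        # Refuse anything that will not fit rather than spilling past the end.
        if len(data) > BUFFER_SIZE:
            raise ValueError(f"{len(data)} bytes will not fit in the buffer")
        for i, byte in enumerate(data):
            buffer[i] = byte

    write_bytes([1, 2, 3])               # fine
    try:
        write_bytes(list(range(20)))     # rejected, nothing overwritten
    except ValueError as err:
        print(err)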

BUG 0: Off-by-One Errors

Lift buttons 0, 1, 2, 3
Image by Coombesy from Pixabay

Arrays in many languages start from position 0. This means the last position is one less than the length of the array. Get it wrong… as every novice (and expert) programmer does at some point… and you run off the end. Oddly, in Europe we have no problem in a lift pressing 1 to go to what is really the second floor up: we count the ground floor as floor 0. In other situations, count from 0 and you do one too many things. Did I say 10 bugs… OOPS!

Moral: Think hard about every loop counter in your program.
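
The lift-button version of the bug in code: the last valid index is one less than the length, so looping one step too far runs off the end.

    floors = ["ground", "first", "second", "third"]   # indexes 0 to 3

    # BAD: len(floors) is 4, but floors[4] does not exist.
    # for i in range(len(floors) + 1):
    #     print(floors[i])       # IndexError when i reaches 4

    # GOOD: range(len(floors)) gives exactly 0, 1, 2, 3.
    for i in range(len(floors)):
        print(i, floors[i])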

The number 1 moral from all of this is that thorough testing matters a lot. Just trying a program a few times is not enough. In fact, thorough testing is not enough either. You also need code walkthroughs, clear logical thinking about your code, formal reasoning tools where they exist, strong type systems, good commenting, and more. Most of all, programmers need to understand that their code will NOT be correct, and that they must put lots of effort into finding bugs if crucial ones are not to be missed. Knowing the kinds of mistakes everyone makes, and being extra vigilant about them, is a good start.

And that is the end of my top 10 bugs…until the next arrogant, over-confident programmer causes the next catastrophe.


Do something computationally funny for money

A red nose
Image by CS4FN

It is Red Nose Day in the UK: the day of raising money for the Comic Relief charity by buying and wearing red noses and generally doing silly things for money.

Red noses are not just for Red Nose Day though, and if you’ve been supporting it every year, you possibly now have a lot of red noses, like we do. What can you do with lots of red noses? Well, one possibility is to count in red nose binary as a family or group of friends. (Order your red noses from Comic Relief – a family pack has 4 and a school pack 25 – or make a donation to the charity there.)

Red nose binary

Let’s suppose you are a family of four. All stand in a line holding your red noses (you may want to set up a camera to film this). How many numbers can 4 red noses represent? See if you can work it out first. Then start counting:

  • No one wearing a red nose is 0,
  • the rightmost end person puts theirs on for 1,
  • they take it off and the next person puts theirs on for 2,
  • the first person puts theirs back on for 3,
  • the first two people take their noses off and the third person puts theirs on for 4
  • and so on…

The pattern we are following is the first (rightmost end) person changes their nose every time we count. The second person has the nose off for 2 then on for the next 2 counts. The third person changes theirs every fourth count (nose off for 4 then on for 4) and the last person changes theirs every eighth count (off for 8, on for 8). That gives a unique nose pattern every step of the way until eventually all the noses are off again and you have counted all the way from 0 to 15. This is exactly the pattern of binary that computers use (except they use 1s and 0s rather than wear red noses).

What is the biggest number you get to before you are back at 0? It is 15. Here is what the red nose binary pattern looks like.

The binary sequence in faces wearing red noses
Image by CS4FN

Try and count in red nose binary like this, putting on and taking off red noses as fast as you can, following the pattern without making mistakes!
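
If you want to check your counting, a few lines of Python will print the full pattern (1 for nose on, 0 for nose off, with the rightmost person last in each list):

    PEOPLE = 4

    for number in range(2 ** PEOPLE):   # 0 up to 15 for 4 people
        noses = [(number >> p) & 1 for p in reversed(range(PEOPLE))]
        print(number, noses)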

The numbers we have put at the top of each column are how much a red nose is worth in that column. You could write the number of the column on that person’s red nose to make this obvious. In our normal decimal way of counting, digits in each column are worth 10 times as much as the column to their right (1s, 10s, 100s, 1000s, etc.). Here we are doing the same but with 2s (1s, 2s, 4s, 8s, etc.). You can work out what a number represents just by adding in a column’s number if there is a red nose there, and ignoring it if there is no red nose. So, for example, 13 is made up of an 8s red nose + a 4s red nose + a 1s red nose: 8 + 4 + 1 = 13.

13 in red nose binary with the 8, the 4 and the 1 red nose all worn.
Image by CS4FN
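
The adding-up rule as a quick sketch in Python, matching the 13 example above:

    columns = [8, 4, 2, 1]   # what each person's red nose is worth
    noses   = [1, 1, 0, 1]   # 13: the 8s, 4s and 1s noses are on

    value = sum(worth for worth, on in zip(columns, noses) if on)
    print(value)             # 8 + 4 + 1 = 13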

Add one more person (perhaps the dog, if they are a friendly dog willing to put up with this sort of thing) with a red nose (now worth 16) to the line, and how many more numbers does that mean you can count up to? It’s not just one more. You can now go through the whole original sequence twice: once with the dog wearing no red nose, once with them wearing one. So you can now count all the way from 0 to 31. Each time you add a new person (or pet*, though goldfish don’t tend to like it) with a red nose, you double how many numbers you can represent.

There is lots more you can do once you can count in red nose binary. Do red nose binary addition with three lines of friends wearing red noses: two lines representing the numbers to add, with the answer computed on the third line. For that you need to learn how to carry a red nose from one person to the next (sketched in code below)! Or play the game of Nim, using red nose binary to work out your moves (it is the sneaky way mathematicians and computer scientists use to work out how to always win). You can even build a working computer (a Turing Machine) out of people wearing red noses… but perhaps we will save that for next year.
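
Here is a minimal sketch of that carrying idea in Python: add two lines of noses rightmost-person first, passing any carry nose along to the next person.

    def add_red_nose_binary(a, b):
        # a and b are lists of 0s and 1s, rightmost person last.
        result, carry = [], 0
        for x, y in zip(reversed(a), reversed(b)):
            total = x + y + carry
            result.append(total % 2)   # this person's nose on or off
            carry = total // 2         # the nose carried leftwards
        if carry:
            result.append(carry)       # an extra person may be needed!
        return list(reversed(result))

    print(add_red_nose_binary([0, 1, 0, 1], [0, 0, 1, 1]))   # 5 + 3 -> [1, 0, 0, 0]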

What else can you think of to do with red nose binary?

Paul Curzon, Queen Mary University of London

*Always make sure your pet (or other family member) has given written consent before you put a red nose on them for ethical counting.

More on computers and comedy

This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos