Object-oriented pizza at the end of the universe

Object-oriented programming is a popular style of programming. To understand what it is all about, it can help to think about cooking a meal (Hitchhiker’s Guide to the Galaxy style) where the meal cooks itself.

People talk about programs being like recipes to follow. The comparison helps because both programs and recipes are sets of instructions: if you follow the instructions precisely, in the right order, you should get the intended result without having to work out how to do it yourself.

That is only one way of thinking about what a program is, though. The recipe metaphor corresponds to a style of programming called procedural programming. Another completely different way of thinking about programs (a different paradigm) is object-oriented programming. So what is that about if not recipes?

In object-oriented programming, programmers think of a program not as a series of recipes (sets of instructions to be followed, each doing a distinct task) but as a collection of objects that send messages to each other to get things done. Different objects also have different behaviours – different actions they can perform. What do we mean by that? That is where The Hitchhiker’s Guide to the Galaxy may help.

In the book “The Restaurant at the End of the Universe” by Douglas Adams, part of the Hitchhiker’s Guide to the Galaxy series, genetically modified animals are bred to delight in being your meal. They take great personal pride in being perfectly fattened and might suggest their leg as being particularly tasty, for example.

We can take this idea a little further. Imagine a genetically engineered future in which animals and vegetables are bred with such intelligence (if you can call it that) that they are able to cook themselves. Each duck can roast itself to death or, alternatively, fry itself perfectly. Now, when a request comes in for duck and mushroom pizza, messages go to the mushrooms, the ducks, and so on, and they get to work preparing themselves as requested by the pizza base, which, once created and topped, promptly bakes itself in a hot oven as requested. This is roughly how an object-oriented programmer sees a program: a collection of objects come to life. Each different kind of object is programmed with instructions for all the operations it can perform on itself (its behaviours). If such an operation is required, a request goes to the object itself to do it.

Compare these genetically modified beings to a program controlling a food factory, say. In the procedural programming version we write a program (or recipe) for duck and mushroom pizza that sets out the sequence of instructions to follow. The computer, acting as a chef, works through the instructions in turn. The programmer splits the instructions into separate sets that do different tasks: making pizza dough, adding the toppings, and so on. Specific instructions say when the computer chef should start following a new set of instructions and when it should return to a previous task to continue with the old one.
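To make the contrast concrete, here is a minimal sketch in Python of how the procedural version might look. It is purely illustrative – the function names and steps are invented, not taken from any real kitchen-control program.

```python
# A toy procedural sketch: one top-level recipe, split into subtasks
# that the "computer chef" works through in order.

def make_dough():
    print("Mixing flour, water and yeast, then kneading the dough")

def prepare_duck():
    print("Roasting the duck and tearing it into pieces")

def prepare_mushrooms():
    print("Washing and slicing the mushrooms")

def add_toppings():
    prepare_duck()
    prepare_mushrooms()
    print("Spreading the toppings over the base")

def bake():
    print("Baking the pizza in a hot oven")

def make_duck_and_mushroom_pizza():
    # The recipe itself: a fixed sequence of instructions, followed in turn
    make_dough()
    add_toppings()
    bake()

make_duck_and_mushroom_pizza()
```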

Now, following the genetically modified food idea instead, the program is thought of as separate objects: one for the pizza base, one for the duck, one for each mushroom. The programmer therefore has to think in terms of what objects exist and what their properties and behaviours are. She writes instructions (the program) to give each kind of object its specific behaviours (so a duck has instructions for how to roast itself, how to tear itself into pieces, and how to add its pieces to the pizza base; a mushroom has instructions for how to wash itself, slice itself, and so on). Part of those behaviours are instructions to send messages to other objects to get things done: the pizza base object tells the mushroom objects and the duck object to get their act together, prepare themselves and jump on top, for example.
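Here is the same meal sketched the object-oriented way (again just an illustrative toy, not real code from anywhere): each kind of object has its own behaviours, and the pizza base gets things done by sending messages to the topping objects.

```python
# A toy object-oriented sketch: objects with behaviours, sending messages.

class Duck:
    def prepare(self):
        print("Duck: roasting myself and tearing myself into pieces")

class Mushroom:
    def prepare(self):
        print("Mushroom: washing and slicing myself")

class PizzaBase:
    def __init__(self, toppings):
        self.toppings = toppings

    def make_pizza(self):
        for topping in self.toppings:
            topping.prepare()        # send each topping object a message
        print("Pizza base: toppings on board, baking myself in a hot oven")

pizza = PizzaBase([Duck(), Mushroom(), Mushroom()])
pizza.make_pizza()
```

Notice that nothing outside the Duck class needs to know how a duck prepares itself – each object looks after its own behaviour.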

This is a completely different way to think of a program based on a completely different way of decomposing it. Instead of breaking the task into subtasks of things to do, you break it into objects, separate entities that send messages to each other to get things done. Which is best depends on what the program does, but for many kinds of tasks the object-oriented approach is a much more natural way to think about the problem and so write the program.

So ducks that cook themselves may never happen in the real universe (I hope), but they could exist in the programs of future kitchens run by computers if the programmers use object-oriented programming.

Paul Curzon, Queen Mary University of London

This article was based on a section from Computing Without Computers, a free book by Paul to help struggling students understand programming concepts.



EPSRC supports this blog through research grant EP/W033615/1. 

Stretching your keyboard – getting more out of QWERTY

by Jo Brodie, Queen Mary University of London

A smartphone’s on-screen keyboard layout, called QWERTY after the first six letters on the top line. Image by CS4FN after smartphone QWERTY keyboards.

If you’ve ever sent a text on a phone or written an essay on a computer you’ve most likely come across the ‘QWERTY’ keyboard layout. It looks like this on a smartphone.

This layout has been around in one form or another since the 1870s. It was first used in old mechanical typewriters, where pressing a letter on the keyboard caused a hinged metal arm, with that same letter embossed at the end, to swing into place and thwack a ribbon coated with ink, making an impression on the paper. It was quite loud!

The QWERTY keyboard isn’t just used by English speakers but can easily be used by anyone whose language is based on the same A, B, C Latin alphabet (so French, Spanish, German etc). All the letters that an English speaker needs are right there in front of them on the keyboard and with QWERTY… WYSIWYG (What You See Is What You Get). There’s a one-to-one mapping of key to letter: tap the A key and a letter A appears on screen, tap the M key and an M appears. (To get a lowercase letter you just tap the key, but to make it uppercase you need to tap two keys: the up arrow (‘shift’) key plus the letter.)

A French or Spanish speaking person could also buy an adapted keyboard that includes letters like É and Ñ, or they can just use a combination of keys to make those letters appear on screen (see Key Combinations below). But what about writers of other languages which don’t use the Latin alphabet? The QWERTY keyboard, by itself, isn’t much use for them so it potentially excludes a huge number of people from using it.

In the English language the letter A never alters its shape depending on which letter goes before or comes after it. (There are 39 lower case letter ‘a’s and 3 upper case ‘A’s in this paragraph and, apart from the difference in case, they all look exactly the same.) That’s not the case for other languages such as Arabic or Hindi where letters can change shape depending on the adjacent letters. With some languages the letters might even change vertical position, instead of being all on the same line as in English.

Early attempts to make writing in other languages easier assumed that non-English alphabets could be adapted to fit into the dominant QWERTY keyboard, with letters that are used less frequently being ignored and other letters being simplified to suit. That isn’t very satisfactory and speakers of other languages were concerned that their own language might become simplified or standardised to fit in with Western technology, a form of ‘digital colonialism’.

But in the 1940s other solutions emerged. The design for one Chinese typewriter avoided QWERTY’s ‘one key equals one letter’ (which couldn’t work for languages like Chinese or Japanese which use thousands of characters – impossible to fit onto one keyboard, see picture at the end!).

Rather than using the keys to print one letter, the user typed a key to begin the process of finding a character. A range of options would be displayed and the user would select another key from among them, with the options narrowing until they arrived at the character they wanted. Luckily this early ‘retrieval system’ of typing actually only took a few keystrokes to bring up the right character, otherwise it would have taken ages.

This is a way of using a keyboard to type words rather than letters, saving time by only displaying possible options. It’s also an early example of ‘autocomplete’ now used on many devices to speed things up by displaying the most likely word for the user to tap, which saves them typing it.

For example, in English the letter Q is almost* always followed by the letter U, to produce words like QUAIL, QUICK or QUOTE. There are only a handful of letters that can follow QU – the letter Z wouldn’t be any use but most of the vowels would be. You might be shown A, E, I or O, and if you selected A then you’ve further restricted what the word could be (QUACK, QUARTZ, QUARTET etc).
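Here is a minimal sketch of that narrowing-down idea in Python. The tiny word list is invented purely for illustration – a real system would use a dictionary of thousands of words and how often each is used.

```python
# A toy retrieval/autocomplete sketch: each letter typed narrows the candidates,
# so only letters that could actually come next need to be offered.
WORDS = ["quack", "quail", "quartet", "quartz", "queen", "quick", "quote"]

def possible_next_letters(typed_so_far):
    candidates = [w for w in WORDS if w.startswith(typed_so_far)]
    return sorted({w[len(typed_so_far)] for w in candidates
                   if len(w) > len(typed_so_far)})

print(possible_next_letters("qu"))   # ['a', 'e', 'i', 'o']
print(possible_next_letters("qua"))  # ['c', 'i', 'r'] - narrowing down fast
```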

In fact one modern typing system, designed for typists with physical disabilities, also uses this concept of ‘retrieval’, relying on a combination of letter frequency (how often a letter is used in the English language) and probabilistic predictions (how likely a particular letter is to come next in an English word). Dasher is a computer program that lets someone write text without using a keyboard: instead a mouse, joystick, touchscreen or gaze-tracker (a device that tracks the person’s eye position) can be used.

Letters are presented on-screen in alphabetical order from top to bottom on the right-hand side (lowercase first, then uppercase), along with punctuation marks. The user ‘drives’ through the word by first pushing the cursor towards the first letter; then the next possible set of letters appears to choose from, and so on until each word is completed. You can see it in action in this video on the Dasher Interface.

Key combinations

The use of software to expand the usefulness of QWERTY keyboards is now commonplace, with programs pre-installed onto devices which run in the background. These IMEs, or Input Method Editors, can convert a set of keystrokes into a character that’s not available on the keyboard itself. For example, while I can type SHIFT+8 to display the asterisk (*) symbol that sits on the 8 key, there’s no degree symbol (as in 30°C) on my keyboard. On a Windows computer I can create it using the numeric keypad on the right of some keyboards, holding down the ALT key while typing the sequence 0176. While I’m typing the numbers nothing appears, but once I complete the sequence and release the ALT key the ° appears on the screen.

English language keyboard image by john forcier from Pixabay, highlighted by CS4FN, showing the numeric keypad highlighted in yellow with the two Alt keys and the ‘num lock’ key highlighted in pink. Num lock (‘numeric lock’) needs to be switched on for the keypad to work; then use the Alt key plus a combination of numbers on the numeric keypad to produce a range of additional ‘alt code’ characters.

When Japanese speakers type they use the main ‘ABC’ letters on the keyboard, but the principle is the same – a combination of keys produces a sequence of letters that the IME converts to the correct character. Or perhaps they could use Google Japan’s April Fool solution from 2010, which surrounded the user with half a dozen massive keyboards with hundreds of keys, a little like sitting at a massive drum kit!
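The core idea – a whole sequence of keystrokes mapping to a single character – can be sketched in a few lines of Python. The lookup table below is a tiny invented sample, not a real IME’s data.

```python
# A toy IME/alt-code sketch: collect a key sequence, then convert it to a character.
SEQUENCE_TO_CHARACTER = {
    "0176": "°",   # degree sign
    "0233": "é",
    "0241": "ñ",
}

def finish_sequence(keys_typed):
    """Called when the modifier key is released: convert the whole sequence."""
    return SEQUENCE_TO_CHARACTER.get(keys_typed, "?")  # '?' if unrecognised

print(finish_sequence("0176"))  # prints °
```

A real IME does much more than a single lookup, of course – it has to cope with thousands of characters and offer choices as you type – but a keystrokes-to-character mapping is the heart of it.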

*QWERTY is a ‘word’ which starts with a Q that’s not followed by a U of course…


More on …

The ‘retrieval system’ of typing mentioned above, which lets the user get to the word or characters more quickly, is similar to the general problem solving strategy called ‘Divide and Conquer’. You can read more about that and other search algorithms in our free booklet ‘Searching to Speak‘ (PDF) which explores how the design of an algorithm could allow someone with locked-in syndrome to communicate. Locked-in syndrome is a condition resulting from a stroke where a person is totally paralysed. They can see, hear and think but cannot speak. How could a person with Locked-in syndrome write a book? How might they do it if they knew some computational thinking?


EPSRC supports this blog through research grant EP/W033615/1.

Is ChatGPT’s “CS4FN” article good enough?

(Or how to write for CS4FN)

A robot emerging from a laptop screen
ChatGPT image AI generated by Alexandra_Koch from Pixabay

Follow the news and it is clear that the chatbots are about to take over journalism, novel writing, script writing, writing research papers, … just about all kinds of writing. So how about writing for the CS4FN magazine? Are they good enough yet? Are we about to lose our jobs? Jo asked ChatGPT to write a CS4FN article to find out. Read its efforts before reading on…

As editor I not only write articles but also vet them and tweak them when necessary to fit the magazine style. So I’ve looked at ChatGPT’s offering as I would one coming from a person …

ChatGPT’s essay writing has been compared to that of a good but not brilliant student. Writing CS4FN articles is a task we have set students in the past, in part to give them experience of how you must write in different styles for different purposes. Different audience? Different writing. Only a small number come close to what I am after. They generally have one or more issues. A common problem when students write for CS4FN is, sadly, a lack of good grammar and punctuation throughout, beyond just typos (basic but vital English skills seem to be severely lacking these days, even with spell checking and grammar checking tools to help). Other common problems include a lack of structure, no hook at the start, over-formal writing (so the wrong style), no real fun element at all and/or being devoid of stories about people, and an obsession with a few subjects (like machine learning!) rather than finding something new to write about. The results are also often vanilla articles about that topic, just churning out looked-up facts rather than finding some new, interesting angle.

How did the chatbot do? It seems to have made most of the same mistakes. At least ChatGPT’s spelling and grammar are basically good, so that is a start: it is a good primary school student, then! Beyond that it has behaved like the weaker students do… and missed the point. It has actually just written a pretty bog-standard factual article explaining the topic it chose, and of course, given a free choice, it chose … machine learning! Fine, if it had a novel twist, but there are no interesting angles added to bring the topic alive. Nor did it describe the contributions of a person. In fact, no people are mentioned at all. It also used a pretty formal style of writing (“In conclusion…”). Just like humans (especially academics) it used too much jargon and didn’t even explain all the jargon it did use (even after being prompted to write for a younger audience). If I were editing, I’d get rid of the formality and unexplained jargon for starters. Just like the students who can actually write but don’t yet get the subtleties, it hasn’t grasped that it should have adapted its style, even when prompted.

It knows about structure and can construct an essay with a start, a middle and an end, as it has put in an introduction and a conclusion. What it hasn’t done, though, is add any kind of “grab”. There is nothing at the start to really capture the attention. There is no strange link, no intriguing question, no surprising statement, no interesting person… nothing to really grab you (though Jo saved it by adding a grab to the start: the fact that she had asked an AI to write it). It hasn’t added any twist at the end, or included anything surprising. In fact, there is no fun element at all. Our articles can be serious rather than fun, but then the grab has to be about the seriousness: linked to bad effects for society, for example.

ChatGPT has also written a very abstract essay. There is little in the way of context or concrete examples. It says, for example, “rules … couldn’t handle complex situations”. Give me an example of a complex situation so I know what you are talking about! There are no similes or metaphors to help explain. It throws in some application areas for context, like game playing and healthcare, but doesn’t explain them at all (it doesn’t say what kind of breakthrough has been made in game playing, for example). In fact, it doesn’t seem to be writing in a “semantic wave” style that makes for good explanations at all. That is where you explain something by linking the abstract technical thing you are explaining to some everyday context or concrete example, unpacking then repacking the concepts. Explaining machine learning? Then illustrate your points with an example, such as how machine learning might use the movies you watch to predict your voting habits, and explain how the example illustrates the abstract concepts, pointing out the patterns it might spot.

There are several different kinds of CS4FN article. Overall, CS4FN is about public engagement with research. That gives us ways in to explain core computer science though (like what machine learning is). We try to make sure the reader learns something core, if by stealth, in the middle of longer articles. We also write about people and especially diversity, sometimes about careers or popular culture, or about the history of computation. So, context is central to our articles. Sometimes we write about general topics but always with some interesting link, or game or puzzle or … something. For a really, really good article that I instantly love, I am looking for some real creativity – something very different, whether that is an intriguing link, a new topic, or just a not very well known and surprising fact. ChatGPT did not do any of that at all.

Was ChatGPT’s article good enough? No. At best I might use some of what it wrote in the middle of some other article but in that case I would be doing all the work to make it a CS4FN article.

ChatGPT hasn’t written a CS4FN article
in any sense other than in writing about computing.

Was it trained on material from CS4FN to allow it to pick up what CS4FN was? We originally assumed so – our material has been freely accessible on the web for 20 years and the web is supposedly the chatbots’ training ground. If so, I would have expected it to do much better at getting the style right (though if it has used our material it should have credited us!). I’m left thinking that when it is asked to write articles or essays without more guidance than it can make sense of, it just always writes about machine learning! (Just like I always used to write science fiction stories for every story my English teacher set, to his exasperation!) We assumed, because it wrote about a computing topic, that it did understand, but perhaps it is all a chimera. Perhaps it didn’t actually understand the brief even to the level of knowing it was being asked to write about computing and just hit lucky. Who knows? It is a black box. We could investigate more, but this is a simple example of why we need Artificial Intelligences that can justify their decisions!

Of course we could work harder to train it up, as I would a human member of our team. With more of the right prompting we could perhaps get it there. Given time the chatbots will get far better anyway. Even without that, they clearly can now do good basic factual writing, so yes, lots of writing jobs are undoubtedly now at risk (and that includes a wide range of jobs: lawyers, teachers, even programmers and the like) if we as a society decide to let them take over. We may find the world turns much more vanilla as a result, though, with writing becoming more bland and boring without the human spark, and without us noticing till it is lost (just as modern supermarket tomatoes so often taste bland, having lost the intense taste they once had!) … unless the chatbots gain some real creativity.

The basic problem with new technology is that it wreaks changes irrespective of the human cost (when we allow it to, but we so often do, giddy with the new toys). That is fine if, as a society, we have strong ways to support those affected. That might involve major support for retraining and education into the new jobs created. Alternatively, if fewer jobs are created than destroyed, which is the way we may be going, with jobs becoming ever scarcer, then we need strong social support systems and no stigma attached to not having a job. Currently, though, that is not looking likely; the changes of recent times have just increased, not reduced, inequality, with small numbers getting very, very rich but many others getting far poorer as the jobs left pay less and less.

Perhaps it’s not malevolent Artificial Intelligences of science fiction taking over that is the real threat to humanity. Corporations act like living entities these days, working to ensure their own survival whatever the cost, and we largely let them. Perhaps it is the tech companies and their brand of alien self-serving corporation as ‘intelligent life’ acting as societal disrupters that we need to worry about. Things happen (like technology releases) because the corporation wants them to but at the moment that isn’t always the same as what is best for people long term. We could be heading for a wonderful utopian world where people do not need to work and instead spend their time doing fulfilling things. It increasingly looks like instead we have a very dystopian future to look forward to – if we let the Artificial Intelligences do too many things, taking over jobs, just because they can so that corporations can do things more cheaply, so make more fabulous wealth for the few.

Am I about to lose my job writing articles for CS4FN? I don’t think so. Why do I write CS4FN? I love writing this kind of stuff. It is my hobby as much as anything, so I do it for my own personal pleasure as well as for the good I hope it does, whether inspiring and educating people or just throwing up things to think about. Even if the chatbots were good enough, I wouldn’t stop writing. It is great to have a hobby that may also be useful to others. And why would I stop doing something I do for fun just because a machine could do it for me? But that is just lucky for me. Others who do it for a living won’t be so lucky.

We really have to stop and think about what we want as humans. Why do we do creative things? Why do we work? Why do we do anything? Replacing us with machines is all well and good, but only if the future for all people is actually better as a result, not just a few.


EPSRC supports this blog through research grant EP/W033615/1.

Understanding Parties

by Paul Curzon, Queen Mary University of London

(First appeared in Issue 23 of the CS4FN magazine “The women are (still) here”)

The stereotype of a computer scientist is someone who doesn’t understand people. For many, though, how people behave is exactly what they are experts in. Kavin Narasimhan is one. As a student at QMUL she studied how people move and form groups at parties, creating realistic computer models of what is going on.

We humans are very good at subtle behaviour, and do much of it without even realising it. One example is the way we stand when we form small groups to talk. We naturally adjust our positions and the way we face each other so we can see and hear clearly, while not making others feel uncomfortable by getting too close. The positions we take as we stand to talk are fairly universal. If we understand what is going on, we can create computational models that behave the same way. Most previous models simulated the way we adjust positions as others arrive or leave by assuming everyone tries to both face, and keep the same distance from, the midpoint of the group. However, there is no evidence that that is what we actually do. There are several alternatives. Rather than pointing ourselves at some invisible centre point, we could be subconsciously maximising our view of the people around us. We could be adjusting our positions and the direction we face based on the positions of only the people next to us, or instead based on the positions of everyone in the group.
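To give a flavour of what such rules look like inside a simulation, here is a minimal sketch of the ‘face the midpoint’ rule that the earlier models assumed. The details (2D points, a fixed preferred distance, the step size) are our own simplification for illustration, not Kavin’s actual model.

```python
import math

# A toy sketch of the "face the group's midpoint" rule used by earlier models.
PREFERRED_DISTANCE = 1.0  # how far from the midpoint each person tries to stand
STEP = 0.1                # how far they move towards their ideal spot each update

def update_person(person, others):
    """Nudge one person towards their preferred distance from the group's
    midpoint, and turn them to face that midpoint."""
    everyone = others + [person]
    mid_x = sum(x for x, y in everyone) / len(everyone)
    mid_y = sum(y for x, y in everyone) / len(everyone)
    px, py = person
    away_x, away_y = px - mid_x, py - mid_y
    distance = math.hypot(away_x, away_y) or 1e-9   # avoid dividing by zero
    ideal_x = mid_x + away_x / distance * PREFERRED_DISTANCE
    ideal_y = mid_y + away_y / distance * PREFERRED_DISTANCE
    new_position = (px + STEP * (ideal_x - px), py + STEP * (ideal_y - py))
    facing = math.atan2(mid_y - new_position[1], mid_x - new_position[0])
    return new_position, facing

print(update_person((2.0, 0.0), [(0.0, 0.0), (0.0, 2.0)]))
```

Kavin’s alternative rules would swap the midpoint calculation for something based on the view of, or positions of, the other individuals.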

Kavin videoed real parties where lots of people formed small groups to find out more of the precise detail of how we position and reposition ourselves. This gave her a bird’s eye view of the positions people actually took. She also created simulations with virtual 2D characters that move around, forming groups then moving on to join other groups. This allowed her to try out different rules of how the characters behaved, and compare them to the real party situations.

She found that her alternative rules were more realistic than rules based on facing a central point. For example, the latter generate regular shapes like triangular and square formations, but the positions real humans take are less regular. They are better modelled by assuming people focus on getting the best view of others. The simulations showed that this was also a more accurate way to predict the sizes of the groups that formed, how long they lasted, and how they were spread across the room. Kavin’s rules therefore appear to give a realistic way to describe how we form groups.

Being able to create models like this has all sorts of applications. It is useful for controlling the precise movement of avatars, whether in virtual worlds or teleconferencing. They can be used to control how computer-generated (CGI) characters in films behave, without needing to copy the movements from actors first. It can make the characters in computer games more realistic as they react to whatever movements the real people, and each other, make. In the future we are likely to interact more and more with robots in everyday life, and it will be important that they follow appropriate rules too, so as not to seem alien.

So you shouldn’t assume computer scientists don’t understand people. Many understand them far better than the average person. That is how they are able to create avatars, robots and CGI characters that behave exactly like real people. Virtual parties are set to be that little bit more realistic.



EPSRC supports this blog through research grant EP/W033615/1. 

A machine wrote this post – OpenAI’s ChatGPT

Robot emerging from a laptop screen
ChatGPT image AI Generated by Alexandra_Koch from Pixabay

Blog post by OpenAI’s ChatGPT. Introduction by Jo Brodie

1. Jo wrote this…

ChatGPT is an AI chatbot which can have a conversation with you using everyday (natural) language. It has access to the huge amounts of digital information found on the internet and its ability to use language to write good answers in response to questions has been tested and improved by giving it feedback. It learns, and refines its output.

You don’t need to program it: you can just ask it a question (giving it a prompt) and, after a moment’s ‘thinking’, it will produce an answer. Lots of people have been testing ChatGPT by asking it questions that we already know the answer to, to see how well it can do. It’s actually quite impressive! If you think it’s made a mistake or misunderstood your prompt you can reword your question or give it more information to help it improve its answer.

Can ChatGPT write an article that’s suitable for our magazine or website?

I gave ChatGPT the following prompt: “Please write a 200 to 500 word article suitable for the CS4FN magazine” and moments later it generated a completely new article about machine learning. That told me it knew (or at least could find out) that CS4FN has something to do with computer science. I’m not surprised that it picked machine learning as a topic given that it is a machine that can learn. I didn’t tell it what to write about, it came up with that idea itself.

The article it gave me contained some technical words and not all of them were explained, so I asked it to rewrite the article for a younger audience (who might not have come across some of the words yet).

The result is below – what do you think?

2. ChatGPT wrote this…

From Logic to Learning: The Incredible Advancements of Artificial Intelligence

Artificial Intelligence (AI) has come a long way since it was first invented in the 1950s. At the beginning, AI was based on logic and rules. Experts in a specific field would give the computer a set of instructions to help it make decisions. But, these rules were limited and couldn’t handle complex situations.

Today, AI has seen a huge shift towards machine learning. This means that instead of following rules, AI learns from data. This shift happened because computers got stronger, and more data became available. The breakthroughs in machine learning algorithms helped too.

Machine learning has two categories: supervised learning and unsupervised learning. In supervised learning, the computer learns from labeled data. This means that the correct answer is given to the computer for each input. Once the computer has learned enough, it can predict the answer for new data. Supervised learning is useful for applications like image and speech recognition.

In unsupervised learning, the computer learns from unlabeled data. This means that there is no correct answer provided. The goal of unsupervised learning is to find patterns in the data. This type of learning is useful in applications like anomaly detection and clustering.

Deep learning is a special type of machine learning that uses neural networks. Neural networks are inspired by the human brain and have layers of connected nodes. Each node applies a mathematical function to its inputs and sends the result to the next layer of nodes. Deep learning has revolutionized AI, leading to breakthroughs in areas like natural language processing, computer vision, and game playing.

Even though machine learning has achieved so much, there are still challenges to overcome. One challenge is the lack of transparency in machine learning models. Unlike rule-based systems, it can be hard to understand how a machine learning model made a decision. This can be a problem, especially in areas like healthcare where the decision-making process needs to be trustworthy and explainable.

Another challenge is the need for a lot of data to train machine learning models. While data is easy to find in some areas, like social media, it can be hard to come by in other areas like medical imaging. This lack of data can make machine learning less effective in some applications.

In conclusion, AI has come a long way since the 1950s. We’ve gone from rule-based systems to machine learning, which has led to breakthroughs in many areas. Although there are still challenges to overcome, the future of AI is exciting!

The first attempt is below…

The Evolution of Artificial Intelligence: From Rule-Based to Machine Learning

Artificial Intelligence (AI) has come a long way since its inception in the 1950s. Initially, AI was developed based on rule-based systems, where experts in a specific domain would provide rules to the system to help it make decisions. While these systems worked well in some areas, they were limited in their ability to handle complex problems and adapt to new situations.

In recent years, AI has seen a significant shift towards machine learning, where systems learn from data rather than relying on pre-defined rules. This shift has been driven by advances in computer hardware, the availability of large amounts of data, and breakthroughs in machine learning algorithms.

Machine learning has two main categories: supervised learning and unsupervised learning. In supervised learning, the system is trained on labeled data, where the correct answer is provided for each input. The system uses this labeled data to learn a function that maps inputs to outputs. Once trained, the system can make predictions on new, unseen data. This approach has been successful in a wide range of applications, such as image and speech recognition.

In unsupervised learning, the system is trained on unlabeled data, where no correct answer is provided. The goal of unsupervised learning is to discover patterns and structure in the data. This approach has been useful in areas such as anomaly detection and clustering.

Deep learning is a subset of machine learning that uses neural networks to learn from data. Neural networks are inspired by the structure of the human brain and consist of layers of interconnected nodes. Each node applies a mathematical function to its inputs and passes the result to the next layer of nodes. Deep learning has revolutionized the field of AI, enabling breakthroughs in areas such as natural language processing, computer vision, and game playing.

Despite the many successes of machine learning, there are still limitations and challenges to overcome. One challenge is the lack of interpretability of machine learning models. Unlike rule-based systems, it can be difficult to understand how a machine learning model arrived at a particular decision. This lack of transparency can be problematic in areas such as healthcare, where the decision-making process needs to be explainable and trustworthy.

Another challenge is the need for large amounts of data to train machine learning models. While data is abundant in some domains, such as social media, it can be scarce in others, such as medical imaging. This scarcity of data can limit the effectiveness of machine learning approaches in certain applications.

In conclusion, the evolution of AI from rule-based systems to machine learning has enabled breakthroughs in many areas and holds tremendous potential for the future. However, there are still challenges to overcome, particularly in the areas of interpretability and data scarcity. Addressing these challenges will be critical to realizing the full potential of AI in the years to come.



EPSRC supports this blog through research grant EP/W033615/1.

What the real Pros say

by Paul Curzon, Queen Mary University of London

Originally published in the CS4FN “Women are Here” special

“Rebecca Stewart” by Thomas Bonte, CC BY 2.0 via Flickr (cropped)

Some (female) computer scientists and electronic engineers were asked what they most liked about their job and the subject. Each quote is given with the job they had at the time of the quote. Many have moved on or upwards since.

Here is what the real Pros think …

Building software that protects billions of people from online abuse … I find it tremendously rewarding… Every code change I make is a puzzle: exciting to solve and exhilarating to crack; I love doing this all day, every day.

Despoina Magka, Software engineer, Facebook

Taking on new challenges and overcoming my limitations with every program I write, every bug I fix, and every application I create. It has and continues to inspire me to grow, both professionally and personally.

Kavin Narasimhan, Researcher, University of Surrey

Because computer science skills are useful in nearly every part of our lives, I get to work with biologists, mathematicians, artists, designers, educators and lately a whole colony of naked mole-rats! I love the diversity.

Julie Freeman, artist and PhD student, QMUL

The flexibility of working from any place at any time. It offers many opportunities to collaborate with, and learn from, brilliant people from all over the world.

Greta Yorsh, Lecturer QMUL, former software engineer, ARM.

Possibilities! When you try to do something that seems crazy or impossible and it works, it opens up new possibilities… I enjoy being surrounded by creative people.

Justyna Petke, Researcher, UCL

That we get to study the deep characteristics of the human mind and yet we are so close to advances in technology and get to use them in our research.

Mehrnoosh Sadrzadeh, Senior Lecturer, QMUL

I get the opportunity to understand what both business people and technologists are thinking about, their ideas and their priorities and I have the opportunity to bring these ideas to fruition. I feel very special being able to do this! I also like that it is a creative subject – elegant coding can be so beautiful!

Jill Hamilton, Vice President, Morgan Stanley

You never know what research area the solution to your problem will come from, so every conversation is valuable.

Vanessa Pope, PhD student, QMUL

I get to ask questions about people, and set about answering them in an empirical way. Computer science can lead you in a variety of unexpected directions.

Shauna Concannon, Researcher, QMUL

It is fascinating to be able to provide simpler solutions to challenging requirements faced by the business.

Emanuela Lins, Vice President, Morgan Stanley

I think the best thing is how you can apply it to so many different topics. If you are interested in biology, music, literature, sport or just about anything else you can think of, then there’s a problem that you can tackle using computer science or electronic engineering…I like writing code, but I enjoy making things even more.

Becky Stewart, Lecturer, QMUL

… you get to be both a thinker and a creator. You get to think logically and mathematically, be creative in the way you write and design systems and you can be artistic in the way you display things to users. …you’re always learning something new.

Yasaman Sepanj, Associate, Morgan Stanley

Creating the initial ideas, forming the game, making the story… Being part of the creative process and having a hands-on approach.

Nana Louise Nielsen, Senior Game Designer, Sumo Digital

Working with customers to solve their problems. The best feeling in the world is when you leave … knowing you’ve just made a huge difference.

Hannah Parker, IT Consultant, IBM

It changes so often… I am not always sure what the day will be like

Madleina Scheidegger, Software Engineer, Google.

I enjoy being able to work from home

Megan Beynon, Software Engineer, IBM

I love to see our plans come together with another service going live and the first positive user feedback coming in

Kerstin Kleese van Dam, Head of Data Management, CCLRC

…a good experienced team around me focused on delivering results

Anita King, Senior Project Manager, Metropolitan Police Service

I get to work with literally every single department in the organisation.

Jemima Rellie, Head of Digital Programme, Tate



EPSRC supports this blog through research grant EP/W033615/1. 

Susan Kare: Icon Draw

by Jo Brodie, Queen Mary University of London

(from the archive)

A pixel drawing of a dustbin icon
Image by CS4FN after Susan Kare

Pick up any computer or smart gadget and you’ll find small, colourful pictures on the screen. These ‘icons’ tell you which app is which. You can just touch them or click on them to open the app. It’s quick and easy, but it wasn’t always like that.

Up until the 1980s if you wanted to run a program you had to type a written command to tell the device what to do. This made things slow and hard. You had to remember all the different commands to type. It meant that only people who felt quite confident with computers were able to play with them.

Computer scientists wanted everyone to be able to join in (they wanted to sell more computers too!) so they developed a visual, picture-based way of letting people tell their computers what to do, instead of typing in commands. It’s called a ‘Graphical User Interface’ or GUI.

An artist, Susan Kare, was asked to design some very simple pictures – icons – that would make using computers easier. If people wanted to delete a file they would click on an icon with her drawing of a little dustbin. If people wanted to edit a letter they were writing they could click on the icon showing a pair of scissors to cut out a bit of text. She originally designed them on squared paper, with each square representing a pixel on the screen. Over the years the pictures have become more sophisticated (and sometimes more confusing) but in the early days they were both simple and clear thanks to Susan’s skill.
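You can get a feel for the squared-paper approach with a few lines of code. The grid below is our own rough dustbin shape, not one of Susan Kare’s real icons.

```python
# A toy pixel-icon sketch: 1 means a filled square (pixel), 0 an empty one.
DUSTBIN = [
    [0, 1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
]

for row in DUSTBIN:
    print("".join("█" if pixel else "·" for pixel in row))
```

Swap the 1s and 0s around and you can design any icon you like.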

Try our pixel puzzles, which use the same idea. Then invent your own icons or pixel puzzles. Can you come up with your own easily recognisable pictures using as few lines as possible?



EPSRC supports this blog through research grant EP/W033615/1. 

Celebrating Jean Bartik: 1940s programmer

Two of the ENIAC programmers preparing the computer for Demonstration Day in February 1946. “U.S. Army Photo” from the archives of the ARL Technical Library. Left: Betty Jennings (later Bartik), right: Frances Bilas (later Spence). Image via Wikipedia, Public Domain.

by Jo Brodie, Queen Mary University of London.

Jean Bartik (born Betty Jean Jennings) was one of six women who programmed “ENIAC” (the Electronic Numerical Integrator and Computer), one of the earliest electronic programmable computers. The work she and her colleagues did in the 1940s had a huge impact on computer science; however, their contribution went largely unrecognised for 40 years.

Jean Bartik – born 27 December 1924; died on this day, 23 March 2011

Born in Missouri, USA, in December 1924 to a family of teachers, Betty (as she was then known) showed promise in Mathematics, graduating from her high school in the summer of 1941, aged 16, with the highest marks in maths ever seen at her school. She began her degree in Maths and English at her local teachers’ college (which is now Northwest Missouri State University) but everything changed dramatically a few months in when the US became involved in the Second World War. The men (teachers and students) were called up for war service, leaving a dwindling department, and her studies were paused, resuming only in 1943 when retired professors were brought in to teach; she graduated in January 1945, the only person in her year to graduate in Maths.

Although her family encouraged her to become a local maths teacher, she decided to seek more distant adventures. The University of Pennsylvania in Philadelphia (~1,000 miles away) had put out a call for people with maths skills to help with the war effort; she applied and was accepted. Along with over 80 other women she was employed to calculate, using advanced maths including differential calculus equations, accurate trajectories of bullets and bombs (ballistics) for the military. She and her colleagues were ‘human computers’ (people who did calculations, back before the word meant what it does today), creating range tables: columns of information that told the US Army where they should point their guns to be sure of hitting their targets. This was complex work that had to take account of weather conditions as well as more obvious things like distance and the size of the gun barrel.

Even with 80-100 women working on every possible combination of gun size and angle, it still took over a week to generate one data table, so the US Army was obviously keen to speed things up as much as possible. They had previously, in 1943, given funding to John Mauchly (a physicist) and John Presper Eckert (an electrical engineer) to build a programmable electronic calculator – ENIAC – which would automate the calculations and give them a huge speed advantage. By 1945 the enormous new machine, which took up a room (as computers tended to do in those days), consisted of several thousand vacuum tubes, weighed 30 tonnes and was held together with several million soldered joints. It was programmed by plugging cables into plugboards (like old-fashioned telephone exchanges) and setting switches, so that current flowed through a particular set of circuits; punched cards were used to feed data in and get results out.

From the 100 women now working as human computers in the department, six were selected to become the machine’s operators – an exceptional role. There were no manuals available and ‘programming’ as we know it today didn’t yet exist – it was much more physical. Not only did the ‘ENIAC six’ have to correctly wire each cable, they had to fully understand the machine’s underlying blueprints and electronic circuits to make it work as expected. Repairs could involve crawling into the machine to fix a broken wire or vacuum tube.

World War 2 actually ended in September 1945, before ENIAC was brought into full service, but being programmable (which meant rewiring the cables) it was soon put to other uses. Jean really enjoyed her time working on ENIAC, saying later that she had “never since been in as exciting an environment. We knew we were pushing back frontiers”, but she was working at a time when men’s jobs and achievements were given more credit than women’s.

In February 1946 ENIAC was unveiled to the press, with its (male) inventors demonstrating its impressive calculating speeds and how much time could be saved compared with people performing the calculations on mechanical desk calculators. While Jean and some of the other women were in attendance (and appear in press photographs of the time), the women were not introduced, their work wasn’t celebrated, they were not always correctly identified in the photographs and they were not even invited to the celebratory dinner after the event. As Jean said in a later interview on YouTube, “We were sort of horrified!”

In December 1946 she married William Bartik (an engineer) and over the next few years was instrumental in the programming and development of other early computers. She also taught others how to program them (an early computer science teacher!). She often worked with her husband too, following him to different cities for work. However, when her husband took on a new role in 1951, the company’s policy was that wives were not allowed to work in the same place. Frustrated, Jean left computing for a while and also took a career break to raise her family.

In the late 1960s she returned to the field of computer science and for several years she blended her background in Maths and English, writing technical reports on the newer ‘minicomputers’ (still quite large compared to modern computers, but you could fit more of them in a room). However, the company she worked for was sold off and she was made redundant in 1985, at the age of 60. She couldn’t find another job in the industry, which she put down to age discrimination, and she spent her remaining career working in real estate (selling property or land). She died, aged 86, on 23 March 2011.

Jean’s contribution to computer science remained largely unknown to the wider world until 1986, when Kathy Kleiman (an author, law professor and programmer) decided to find out who the women in these photographs were and rediscovered the pioneering work of the ENIAC six.

The ENIAC six women were Kathleen McNulty Mauchly Antonelli, Jean Jennings Bartik, Frances (Betty) Snyder Holberton, Marlyn Wescoff Meltzer, Frances Bilas Spence, and Ruth Lichterman Teitelbaum.




EPSRC supports this blog through research grant EP/W033615/1.

What’s that bird? Ask your phone – birdsong-recognition apps

by Dan Stowell, Queen Mary University of London

Could your smartphone automatically tell you what species of bird is singing outside your window? If so how?

Mobile phones contain microphones to pick up your voice. That means they should be able to pick up the sound of birds singing too, right? And maybe even decide which bird is which?

Smartphone apps exist that promise to do just this. They record a sound, analyse it, and tell you which species of bird they think it is most likely to be. But a smartphone doesn’t have the sophisticated brain that we have, evolved over millions of years to understand the world around us. A smartphone has to be programmed by someone to do everything it does. So if you had to program an app to recognise bird sounds, how would you do it? Computer scientists have devised two very different ways to do this kind of decision making, and they are used by researchers for all sorts of applications, from diagnosing medical problems to recognising suspicious behaviour in CCTV images. Both ways are used by birdsong-recognition phone apps that you can already buy.

The sound of the European robin (Erithacus rubecula), better known as robin redbreast. Recorded by Vladimir Yu. Arkhipov (Arkhivov), CC BY-SA 3.0 via Wikimedia.

Write down all the rules

A blackbird singing. Image by Ian Lindsay from Pixabay.

If you ask a birdwatcher how to identify a blackbird’s sound, they will tell you specific rules. “It’s high-pitched, not low-pitched.” “It lasts a few seconds and then there’s a silent gap before it does it again.” “It’s twittery and complex, not just a single note.” So if we wrote down all those rules in a recipe for the machine to follow, with each rule a little program that could say “Yes, I’m true for that sound”, an app combining them could decide when a sound matches all the rules and when it doesn’t.

This is called an ‘expert system’ approach. One difficulty is that it can take a lot of time and effort to actually write down enough rules for enough birds: there are hundreds of bird species in the UK alone! Each would need lots of rules to be hand crafted. It also needs lots of input from bird experts to get the rules exactly right. Even then it’s not always possible for people to put into words what makes a sound special. Could you write down exactly what makes you recognise your friends’ voices, and what makes them different from everyone else’s? Probably not! However, this approach can be good because you know exactly what reasons the computer is using when it makes decisions.

The sound of a European blackbird (Turdus merula) singing merrily in Finland, from Wikipedia (song 1). Public Domain via Wikimedia.
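Here is a minimal sketch of the expert-system idea in Python. The rules and thresholds are invented to show the shape of the approach – they are not real blackbird-detection values.

```python
# A toy expert-system sketch: each rule is a little program that can say
# "yes, I'm true for that sound"; the app only agrees if every rule does.

def is_high_pitched(sound):
    return sound["average_pitch_hz"] > 1500

def lasts_a_few_seconds(sound):
    return 1.0 < sound["duration_s"] < 5.0

def is_twittery_and_complex(sound):
    return sound["distinct_notes"] > 5

BLACKBIRD_RULES = [is_high_pitched, lasts_a_few_seconds, is_twittery_and_complex]

def sounds_like_blackbird(sound):
    return all(rule(sound) for rule in BLACKBIRD_RULES)

recording = {"average_pitch_hz": 2000, "duration_s": 2.5, "distinct_notes": 12}
print(sounds_like_blackbird(recording))  # True for this made-up recording
```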

This is very different from the other approach which is…

Show it lots of examples

A lot of modern systems use the idea of ‘machine learning’, which means that instead of writing rules down, we create a system that can somehow ‘learn’ what the correct answer should be. We just give it lots of different examples to learn from, telling it what each one is. Once it has seen enough examples to get it right often enough, we let it loose on things we don’t know in advance. This approach is inspired by how the brain works. We know that brains are good at learning, so why not do what they do!
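Here is a sketch of the machine-learning approach using the scikit-learn library. The two ‘features’ describing each sound, and the tiny training set, are invented purely for illustration – a real app would learn from thousands of labelled recordings and far richer audio features.

```python
# A toy machine-learning sketch (needs scikit-learn: pip install scikit-learn).
from sklearn.neighbors import KNeighborsClassifier

# Each sound is described by two made-up features:
# [average pitch in Hz, song length in seconds]
training_sounds = [
    [2000, 2.5], [2200, 3.0], [1900, 2.0],   # examples labelled "blackbird"
    [4000, 1.0], [4300, 0.8], [3900, 1.2],   # examples labelled "robin"
]
labels = ["blackbird", "blackbird", "blackbird", "robin", "robin", "robin"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_sounds, labels)     # "show it lots of examples"

new_sound = [[2100, 2.7]]              # a recording it has never heard before
print(model.predict(new_sound))        # ['blackbird']
```

Notice that we never wrote down a rule about pitch or duration – the program finds whatever pattern separates the labelled examples, which is exactly why it can be hard to know what it is really paying attention to.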

One difficulty with this is that you can’t always be sure how the machine comes up with its decisions. Often the software is a ‘black box’ that gives you an answer but doesn’t tell you what justifies that answer. Is it really listening to the same aspects of the sound as we do? How would we know?

On the other hand, perhaps that’s the great thing about this approach: a computer might be able to give you the right answer without you having to tell it exactly how to do that!

It means we don’t need to write down a ‘recipe’ for every sound we want to detect. If it can learn from examples, and get the answer right when it hears new examples, isn’t that all we need?

Which way is best?

There are hundreds of bird species that you might hear in the UK alone, and many more in tropical countries. Human experts take many years to learn which sound means which bird. It’s a difficult thing to do!

So which approach should your smartphone use if you want it to help identify birds around you? You can find phone apps that use one approach or another. It’s very hard to measure exactly which approach is best, because the conditions change so much. Which one works best when there’s noisy traffic in the background? Which one works best when lots of birds sing together? Which one works best if the bird is singing in a different ‘dialect’ from the examples we used when we created the system?

One way to answer the question is to provide phone apps to people and to see which apps they find most useful. So companies and researchers are creating apps using the ways they hope will work best. The market may well then make the decision. How would you decide?


This article was originally published on the CS4FN website and can also be found on pages 10 and 11 of Issue 21 of the CS4FN magazine ‘Computing sounds wild’. You can download a free PDF copy of the magazine (below), or any of our other free material at our downloads site.




EPSRC supports this blog through research grant EP/W033615/1.
