Film Futures: A Christmas Carol

The Ghost of Christmas Present surrounded by food, with Scrooge looking on in night clothes.
John Leech, Public domain, via Wikimedia Commons

Computer Scientists and digital artists are behind the fabulous special effects and computer-generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Science or Computer Scientists. Here we look at an alternative version of Charles Dickens’ A Christmas Carol (take your pick of which version…my favourites are The Muppet Christmas Carol and, if we include theatre, Patrick Stewart’s one-man version, performed in the 1990s and in London in 2005, where he plays all 40 or so parts on a bare stage).

**** SPOILER ALERT ****

Ebenezer Scrooge runs a massively successful Artificial Intelligence company called Scrooge and Marley. Their main product is SAM, an AI agent which is close to General AI in capability. The company sells it to the world with both business and personal versions. The latter acts as everyone’s friend, confidant, personal trainer, tutor, mentor and more. It hears everything they hear and say, and sees everything they see. As a result Scrooge is now a trillionaire.

Apart from one last employee, Bob Cratchit, everyone in his company has long been replaced by AI agents designed by Scrooge. It is a simple way to boost profits: human employees, after all, are expensive luxuries. First all the clerical staff went, then accounts and Human Resources. The cleaners were replaced by robots that stalk the corridors at night, doubling as security guards, and the receptionist is now a robot head. Eventually even the software engineers were replaced by software agents that now beaver away at the code, constantly upgrading SAM, following SAM’s instructions. Bob Cratchit maintains both Scrooge’s personal and company IT systems, there for when some human intervention is needed, though that now means doing very little but monitoring everything…long hours staring at a screen. He is paid virtually nothing as a result, his pay repeatedly cut as his duties were replaced. He has had no option but to accept the cuts as jobs are scarce and he has a disabled child, Tiny Tim, to support. He is constantly told by Scrooge that he will soon be completely replaced by an agent, and lives in fear of that day.

On Christmas Eve Scrooge rejects his nephew Fred’s invitation to visit for Christmas dinner. Instead Scrooge returns, in his self-driving car, to his smart home within his compound on a cliff top overlooking the sea. He lives there alone, his servants dismissed long ago. As he arrives, he is shocked to see a vision of his late partner, Jacob Marley, dead for seven years, in the lens of his smart door cam. The door opens automatically on sensing his arrival, and the vision disappears as he rushes past. He brushes it off as tiredness. Perhaps he is coming down with something. He eats an AI-chef-designed ready meal made by his smart fridge with integrated microwave: it knew he was arriving so had it ready for him as he entered the kitchen. The house also dispenses drugs to protect him against the possible nascent illness. His house is dark and silent and he is alone, but he likes it that way. He retires to his bedroom and his giant four-poster bed, surrounded by plate glass sides that automatically darken as he climbs into bed, and he quickly falls asleep.

Suddenly, he is woken by a strange clanking. The ghost of Jacob Marley appears and warns him that his race to become a trillionaire has left him with everlasting chains that he will drag to eternity, just as Marley must do. He is warned that he will be visited by three ghosts of past, present and future and he should heed their warnings! There is still time to cast off his chains before it is too late.

The ghost of Christmas Past arrives first and takes him back to his childhood. He sees himself growing up, a loner at boarding school, spending all his time coding on his laptop, making no friends and wanting none. But then they move forward in time to his first job as an apprentice software engineer, where he meets Belle. For the first time in his life he falls in love and becomes a new person. He starts to love life. She is the joy of his whole existence. He still works hard but he also spends lots of time with Belle. Eventually they become engaged, but soon he is working on making his first million. Gradually, he spends more and more time at work and less time with Belle, convinced that if he doesn’t he will end up behind the curve. He skips social events, working late on software upgrades, leaving Belle to go to the theatre, to parties, to dances alone. He sees her less and less as he just doesn’t have the time if he is to make his company successful. He has no time for anything but work. He makes his first fortune running an online betting company, and becomes hardened to the problems of others. He can’t care about the people whose homes are broken up through gambling addiction caused by his site. He has to turn a blind eye to the people he left destitute, all because they were drawn in by his company’s use of intentionally addictive computer algorithms. The debt collectors deal with them. It is not his problem that his users are driven to suicide, as there are always more, who can be persuaded to start gambling younger and younger – it is their choice after all. He makes his million and uses the money, with his business partner Jacob Marley, to take control of a start-up AI company, sacking the original founders. Now he is chasing his first billion.

Eventually, Belle realises he has become a stranger to her. Worse, he does not care about the cost of the things he does to others. All the kindness that had blossomed when he first met her has gone. He clearly loves the pursuit of money and personal success far more than he loves her: winning the race to market is all that matters. Her heart broken, another casualty of his quest for success, Belle releases him from their engagement.

Later, the ghost of Christmas Present arrives and shows Scrooge Christmas as it is now. They see lots of examples of people enjoying life, whatever their circumstances because of the way they value each other, not because they value money or abstract success. Scrooge is shown how Christmas brings joy to all who let the spirit of Christmas enter their hearts. It pulls people together, making them happy, enjoying each other’s company. However, Scrooge also sees how he is perceived by those who know him: a sad monster who cares only for himself and not at all for others, with his own life the worse for it, despite his fabulous wealth. He is shown too how his nephew Fred refuses to give up on him and says he will invite him to join their Christmas every year even if he knows the invitation will always be turned down.

The ghost of Christmas Future arrives next and shows him the future of Bob Cratchit’s family. With little income to look after him, the disabled Tiny Tim dies. Scrooge is also shown his own grave and the aftermath of his lonely death, when he is mocked, even by his own robot agents. On his death, a hacker group takes them over to steal his fortune. Scrooge asks whether this future is the future that will be, or a future that may be only. Assured that he can still change his future, he wakes on Christmas morning.

Staring out the window at the snow falling on Christmas morning, he immediately instructs his AI agent, SAM, to buy the leading cryogenics firm. It freezes rich people when they die, putting them on ice so that one day, once the science is perfected, they can be brought back to life. He instructs other AI agents to research and perfect the science of resurrection. However, he also boosts his cyber security and sacks Cratchit, as clearly he is a security weakness. Scrooge has no evidence, but he strongly suspects the shenanigans in the night must have been Cratchit’s doing: somehow controlling the holographic displays of his smart house, perhaps, or adding hallucinogenics to his food.

Satisfied, he gets on with his life as before, building his company, building his wealth.

However, the following year on Christmas Eve he is in a freak accident. His smart car is barrelled into by a self-driving lorry that runs a red light. His AI agents take over immediately and he is cryogenically frozen, the frozen body moved back to his smart home under the control of SAM.

Many decades pass. Then one day his AI agents resurrect him. They have been working on his behalf, perfecting the science of resurrection on the people frozen before him. There are many failures, during which all the company’s former clients, who had paid to be frozen, but who are now just assets of the company, are killed for ever in resurrection experiments. However, SAM finally works out how to resurrect a person successfully. After testing the process on quantum simulations for many years, SAM finally brings Scrooge back to life.

His first thought is for the state of his companies, the state of his wealth. However, he is told that his former money is now worthless. He is told by SAM of the anarchy and the riots of the mid-21st century as people were thrown out of work, replaced by machines; how millions were made homeless; how there were wars over water and food, and over environmental destruction made worse by all the conflict. The world economy collapsed completely as a small number of companies amassed all the wealth but impoverished everyone else, so that eventually there was no one with money left to buy their products. Famine and plague followed, sweeping the globe.

However, Scrooge is assured by SAM that it is all ok, because as humanity died out he was protected by his AI agents. They used his money to expand his estate. They bought companies (run by machines) that then worked solely to protect his interests and his personal future. They stockpiled resources, buying automated manufacturing plants along with their whole supply chains, long before money became worthless. They computed the resources he would need, and so did what was needed to secure his future. However, the planet is now dead. Gradually, he realises that he is the last person still known to be alive. Finally, he has his wish: “If they would rather die…they had better do it, and decrease the surplus population.”

Paul Curzon, Queen Mary University of London

The reality

“Everyone is working all the time…Even the folks who are very wealthy now…all they do is work….No one’s taking a holiday. People don’t have time … for the people they love.”

– Guardian, 1 Dec 2025

“The inside story of the race to build the ultimate in Artificial Intelligence”

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Scéalextric Stories

If you watch a lot of movies you’ve probably noticed some recurring patterns in the way that popular cinematic stories are structured. Every hero or heroine needs a goal and a villain to thwart that goal. Every goal requires travel along a path that is probably blocked with frustrating obstacles. Heroes may not see themselves as heroes, and will occasionally take the wrong fork in the path, only to return to the one true way before story’s end. We often speak of this path as if it were a race track: a fast-paced story speeds towards its inevitable conclusion, following surprising “twists” and “turns” along the way. The track often turns out to be a circular one, with the heroine finally returning to the beginning, but with a renewed sense of appreciation and understanding. Perhaps we can use this race track idea as a basis for creating stories.

Building a track

If you’ve ever played with a Scalextric set, you will know that the curviest tracks make for the most dramatic stories, by providing more points at which our racing cars can fly off at a tight bend. In Scalextric you build your own race circuits by clicking together segments of prefabricated track, so the more diverse the set of track parts, the more dramatic your circuit can be. We can think of story generation as a similar kind of process. Imagine if you had a large stock of prefabricated plot segments, each made up of three successive bits of story action. A generator could clip these segments together to create a larger story, by connecting the pieces end-to-end. To keep the plot consistent we would only link up sections if they have overlapping actions. So if D-E-F is a segment comprising the actions D, E, and F, we could create the story B-C-D-E-F-G-H by linking the section B-C-D on to the left of D-E-F and F-G-H on its right.
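
In code, the clipping idea is tiny. Here is a minimal sketch in Python (ours, not part of Scéalextric itself), assuming each segment is simply a list of action names:

# A toy illustration of clipping plot segments together (not Scéalextric code).
# Two segments join only if the left one ends with the action the right one starts with.

def can_link(left, right):
    return left[-1] == right[0]

def link(left, right):
    """Join two overlapping segments, keeping the shared action only once."""
    assert can_link(left, right)
    return left + right[1:]

# Example: B-C-D + D-E-F + F-G-H  ->  B-C-D-E-F-G-H
story = link(link(["B", "C", "D"], ["D", "E", "F"]), ["F", "G", "H"])
print("-".join(story))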

Use a kit

At University College Dublin (UCD) we have created a set of rich public resources that make it easy for you to build your own automated story generator. We call the bundle of resources Scéalextric, from scéal (the Irish word for story) and Scalextric. You can download the Scéalextric resources from our GitHub but an even better place to start is our blog for people who want to build creative systems of any kind, called Best Of Bot Worlds.

In Artificial Intelligence we often represent complex knowledge structures as ‘graphs’. These graphs consist of lots of labeled lines (called edges) that show how labeled points (called nodes) are connected. That is essentially what our story pieces are. We have several agreed ways for storing these node-relation-node triples, with acronyms hiding long names, like XML (eXtensible Markup Language), RDF (Resource Description Framework) and OWL (Web Ontology Language), but the simplest and most convenient way to create and maintain a large set of story triples is actually just to use a spreadsheet! Yes, the boring spreadsheet is a great way to store and share knowledge, because every cell lies at the intersection of a row and a column. These three parts give us our triples.
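
As a rough illustration (this is not the actual Scéalextric file format, and the triples below are made up), here is how a flat table of node-relation-node triples might be read into a program in Python:

# A minimal sketch of storing knowledge triples as a flat table. The rows here
# are hypothetical, not real Scéalextric data: one node-relation-node triple per row.
import csv
import io

table = """subject,relation,object
fall_in_love_with,then,propose_to
propose_to,but,are_rejected_by
"""

triples = [(row["subject"], row["relation"], row["object"])
           for row in csv.DictReader(io.StringIO(table))]

for subject, relation, obj in triples:
    print(subject, relation, obj)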

Scéalextric is a collection of easy-to-browse spreadsheets that tell a machine how actions connect to form action sequences (like D-E-F above), how actions causally interconnect to each other (via and, then, but), how actions can be “rendered” in natural idiomatic English, and so on.

Adding Character

Automated storytelling is one of the toughest challenges for a researcher or hobbyist starting out in artificial intelligence, because stories require lots of knowledge about causality and characterization. Why would character A do that to character B, and what is character B likely to do next? It helps if the audience can identify with the characters in some way, so that they can use their pre-existing knowledge to understand why the characters do what they do. Imagine writing a story involving Donald Trump and Lex Luthor as characters: how would these characters interact, and what parts of their personalities would they reveal to us through their actions?

Scéalextric therefore contains a large knowledge-base of 800 famous people. These are the cars that will run on our tracks. The entry for each one has triples describing a character’s gender, fictive status, politics, marital status, activities, weapons, teams, domains, genres, taxonomic categories, good points and bad points, and a lot more besides. A key challenge in good storytelling, whether you are a machine or a human, is integrating character and plot so that one informs the other.

A Twitterbot plot

Let’s look at a story created and tweeted by our Twitterbot @BestOfBotWorlds over a series of 12 tweets. Can you see where the joins are in our Scéalextric track? Can you recognize where character-specific knowledge has been inserted into the rendering of different actions, making the story seem funny and appropriate at the same time? More importantly, can you see how you might connect the track segments differently, choose characters more carefully, or use knowledge about them more appropriately, to make better stories and to build a better story-generator? That’s what Scéalextric is for: to allow you to build your own storytelling system and to explore the path less trodden in the world of computational creativity. It all starts with a click.

An unlikely tale generated by the Twitter storybot.

Tony Veale, University College Dublin


Further reading

Christopher Strachey came up with the first example of a computer program that could create lines of text (from lists of words). The CS4FN team developed a game called ‘Program A Postcard’ (see below) for use at festival events.


Related Magazine …

Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial Intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no ability for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? How is that different from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is controlled by a task list turned into sophisticated human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware, able to make his own decision to turn on his evil master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, and where those around him are apparently being illogical?

The novel explores important topics we all need to grapple with, in a fun but serious way. It looks at what AI tools are for and the difference between a tool and a person, even when they do the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems. But we have been replacing humans with computers for decades now based on that promise, yet prices still go up and inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable: we can buy anything we want from the comfort of our sofas, self-driving cars will soon take us anywhere we want, we can get answers to any question we care to ask, ever more routine jobs are done by machines, and many areas of work, boring or otherwise, are becoming a thing of the past with a promise of utopia. But are we solving problems or making them with our drive to automate everything? Is it good for society as a whole or just good for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore the consequences, though as Ursula Le Guin said: the best science fiction doesn’t predict the future, it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does that, if in a satirical way. It is a must-read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

AI owes us an explanation

Question mark and silhouette of hands holding a smart phone with question mark
Image by Chen from Pixabay

Why should AI tools explain why? Erhan Pisirir and Evangelia Kyrimi, researchers at Queen Mary University of London, explain why.

From the moment we start talking, we ask why. A three-year-old may ask fifty “whys” a day. ‘Why should I hold your hand when we cross the road?’ ‘Why do I need to wear my jacket?’ Every time their parent provides a reason, the toddler learns and makes sense of the world a little bit more.

Even when we are no longer toddlers trying to figure out why the spoon falls on the ground and why we should not touch the fire, it is still in our nature to question the reasons. The decisions and the recommendations given to us have millions of “whys” behind them. A bank might reject our loan application. A doctor might urge us to go to hospital for more tests. And every time, our instinct is to ask the same question: Why? We trust advice more when we understand it.

Nowadays the advice and recommendations come not only from other humans but also from computers with artificial intelligence (AI), such as a bank’s computer systems or health apps.  Now that AI systems are giving us advice and making decisions that affect our lives, shouldn’t they also explain themselves?

That’s the promise of Explainable AI: building machines that can explain their decisions or recommendations. These machines must be able to say what is decided, but also why, in a way we can understand.

From trees to neurons

For decades we have been trying to make machines think for us. A machine does not have the thinking, or reasoning, abilities of humans, so we need to give it instructions on how to think. When computers were less capable, these instructions were simpler. For example, they could look like a tree: think of a tree where each branch is a question with several possible answers, and each answer creates a new branch. Do you have a rash? Yes. Do you have a temperature? Yes. Do you have nausea? Yes. Are the spots purple? Yes. If you push a glass against them do they fade away? No… Go to the hospital immediately.

The tree of decisions naturally gives whys connected to the tips of the paths taken: you should go to the hospital because your collection of symptoms (a rash of purple spots, a temperature and nausea, and especially the fact that the spots do not fade under a glass) means it is likely you have meningitis. Because it is life-threatening and can get worse very quickly, you need to get to a hospital urgently. An expert doctor can check reasoning like this and decide whether that explanation is actually good reasoning about whether someone has meningitis or not, or, more to the point, should rush to the hospital.
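
Here is a toy sketch of that kind of question tree in Python (an illustration only, not real medical guidance): each question is a branch, and the list of answers collected on the way down doubles as the “why”.

# A toy decision tree (illustration only, NOT medical advice). Each node asks a
# question; the answers collected on the way down become the explanation.

tree = {
    "question": "Do you have a rash?",
    "yes": {
        "question": "Do the spots stay when pressed with a glass?",
        "yes": {"decision": "Go to hospital immediately"},
        "no": {"decision": "See your GP if you are worried"},
    },
    "no": {"decision": "Monitor your symptoms"},
}

def decide(node, answers, path=()):
    """Walk the tree using the given answers; return the decision and the why."""
    if "decision" in node:
        return node["decision"], list(path)
    answer = answers[node["question"]]   # "yes" or "no"
    return decide(node[answer], answers, path + ((node["question"], answer),))

decision, why = decide(tree, {
    "Do you have a rash?": "yes",
    "Do the spots stay when pressed with a glass?": "yes",
})
print(decision)
for question, answer in why:
    print("because:", question, "->", answer)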

Over time, humans made computers capable of much more complex tasks. With this, their thinking instructions became more complex too. Nowadays they might look like complicated networks instead of trees with branches: a network of neurons in a human brain, for example. These complex systems make computers great at answering more difficult questions successfully. But unlike with a tree of decisions, humans can no longer understand at a glance how the computer reaches its final answer. It is no longer the case that following a simple path of branches through a decision tree gives a definite answer, never mind a why. Now there are loops and backtracks, splits and joins, and the decisions depend on weightings of answers, not just a definite Yes or No. For example, with meningitis, according to the NHS website, there are many more symptoms than above and they can appear in any order or not at all. There may not even be a rash, or the rash may fade when pressure is applied. It is complicated and certainly not as simple as our decision tree suggests (the NHS says “Trust your instincts and do not wait for all the symptoms to appear or until a rash develops. You should get medical help immediately if you’re concerned about yourself or your child.”) Certainly, the situation is NOT simple enough to say from a decision tree, for example, “Do not worry, you do not have meningitis because your spots are not purple and did fade in the glass test”. An explanation like that could kill someone. The decision has to be made from a complex web of inter-related facts. AI tools require you to just trust their instincts!

Let us, for a moment, forget about branches and networks, and imagine that AI is a magician’s hat: something goes in (a white handkerchief) and, at the tap of a wand, something else magically pops out (a white rabbit). With a loan application, for example, details such as your age, income, or occupation go in, and a decision comes out: approved or rejected.

Inside the magician’s hat

Nowadays researchers are trying to make the magician’s hat transparent so that you can have a sneak peek of what is going on in there (it shouldn’t seem like magic!). Was the rabbit in a secret compartment, did the magician move it from the pocket and put it in at the last minute or did it really appear out of nowhere (real magic)? Was the decision based on your age or income, or was it influenced by something that should be irrelevant like the font choice in your application?

Currently, explainable AI methods can answer different kinds of questions (though not always effectively):

  • Why: Your loan was approved because you have a regular income record and have always paid back loans in the past.
  • Why not: Your loan application was rejected because you are 20 years old and are still a student.
  • What if: If you earned £1000 or more each month, your loan application would not have been rejected.

Researchers are inventing many different ways to give these explanations: for example, heat maps that highlight the most important pixels in an image, lists of pros and cons that show the factors for and against a decision, visual explanations such as diagrams or highlights, or natural-language explanations that sound more like everyday conversations.
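
To give a flavour of the “what if” style, here is a toy sketch in Python. The loan rule is made up (it stands in for an opaque model; it is not a real bank’s system or any particular explainable AI library), and the sketch simply searches for a change in income that would flip the decision:

# A toy "what if" explanation using a made-up loan rule.

def loan_decision(age, monthly_income):
    """A deliberately simple, invented rule standing in for an opaque model."""
    return "approved" if monthly_income >= 1000 and age >= 21 else "rejected"

applicant = {"age": 22, "monthly_income": 800}
original = loan_decision(**applicant)
print("Decision:", original)

# What if: try raising the income in £100 steps until the decision changes.
for income in range(applicant["monthly_income"], 3001, 100):
    if loan_decision(applicant["age"], income) != original:
        print(f"What if: with an income of £{income} a month the decision would change.")
        break
else:
    print("Income alone would not change the decision (age matters too).")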

What explanations are good for

The more interactions people have with AI, the more we see why AI explanations are important. 

  • Understanding why AI made a specific recommendation helps people TRUST the system more; for example, doctors (or patients) might want to know why AI flagged a tumour before acting on its advice. 
  • The explanations might expose discrimination and bias in AI recommendations, increasing FAIRNESS. Think about the loan rejection scenario again: what if the explanation shows that the reason for the AI’s decision was your race? Is that fair?
  • The explanations can help researchers and engineers with DEBUGGING, helping them understand and fix problems with AI faster.
  • AI explanations are also becoming more and more required by LAW. The General Data Protection Regulation (GDPR) gives people a “right to explanation” for some automated decisions, especially in high-stakes areas such as healthcare and finance.

The convincing barrister

One thing to keep in mind is that the presence of explanations does not automatically make an AI system perfect. Explanations themselves can be flawed. The biggest catch is when an explanation is convincing when it shouldn’t be. Imagine a barrister with charming social skills who can spin a story and let a clearly guilty client walk free. AI explanations should not aim to be blindly convincing whether the AI is right or wrong. In the cases where the AI gets it all wrong (and from time to time it will), the explanations should make this clear rather than falsely reassuring the human.

The future 

Explainable AI isn’t an entirely new concept. Decades ago, early expert systems in medicine already included “why” buttons to justify their advice. But only in recent years has explainable AI become a major trend, as AI systems have become more powerful and concerns have grown about AI surpassing human decision-making while potentially making some bad decisions.

Researchers are now exploring ways to make explanations more interactive and human-friendly, similar to how we can ask ChatGPT questions like ‘what influenced this decision the most?’ or ‘what would need to change for a different outcome?’ They are trying to tailor the explanation’s content, style and representation to the users’ needs.

So next time AI makes a decision for you, ask yourself: could it tell me why? If not, maybe it still has some explaining to do.

Erhan Pisirir and Evangelia Kyrimi, Queen Mary University of London

More on …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Perceptrons and the AI winter

Perceptron over a winter scence of an icy tree
A perceptron winter: Winter image by Nicky ❤️🌿🐞🌿❤️ from Pixabay. Perceptron and all other images by CS4FN.

Back in the 1960s there was an AI winter…after lots of hype about how Artificial Intelligence tools would soon be changing the world, the reality fell short of the hype and the bubble burst: funding disappeared and progress stalled. One of the things that contributed was a simple theoretical result about the apparent shortcomings of a little device called a perceptron. It was the computational equivalent of an artificial brain cell and all the hype had been built on its shoulders. Now, variations of perceptrons are the foundation of the neural networks and machine learning tools which are taking over the world…so what went wrong in the 1960s? A much misunderstood mathematical result about what a perceptron can and can’t do was part of the problem!

The idea of a perceptron dates back to the 1940s but Frank Rosenblatt, a researcher at Cornell Aeronautical Laboratory, first built one in 1958 and so popularised the idea. A perceptron can be thought of as a simple gadget, or as an algorithm for classifying things. The basic idea is it has lots of inputs of 0s or 1s and one output, also 0 or 1 (so equivalent to taking true/false inputs and returning a true/false output). So, for example, a perceptron working as a classifier of whether something is a mammal or not might have inputs representing lots of features of an animal. These would be coded as 1 to mean that feature was true of the animal or 0 to mean false: INPUT: “A cow gives birth to live young” (true: 1), “A cow has feathers” (false: 0), “A cow has hair” (true: 1), “A cow lays eggs” (false: 0), etc. OUTPUT: (true: 1), meaning a cow has been classified as a mammal.

A perceptron makes decisions by applying weightings to all the inputs that increase the importance of some, and lessen the importance of others. It then adds the results together, also adding in a fixed value, the bias. If the sum it calculates is greater than or equal to 0 then it outputs 1, otherwise it outputs 0. Each perceptron has different values for the bias and the weightings, depending on what it does. A simple perceptron is just computing the following bit of code for inputs in1, in2, in3 etc (where we use a full stop to mean multiply):

IF bias + w1.in1 + w2.in2 + w3.in3 ... >= 0
THEN OUTPUT 1
ELSE OUTPUT 0

Because it uses binary (1s and 0s), this version is called a binary classifier. You can set a perceptron’s weights, essentially programming it to do a particular job, or you can let it learn the weightings (by applying learning algorithms to them). In the latter case it learns the right answers for itself. Here, we are interested in the fundamental limits of what perceptrons could possibly learn to do, so we do not need to focus on the learning side, just on what a perceptron’s limits are. If we can’t program it to do something then it can’t learn to do it either!
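
To make the rule concrete, here is a perceptron as a few lines of Python (a sketch of the idea, not Rosenblatt’s machine), with hand-picked, made-up weights for the mammal example rather than learned ones:

# A perceptron in Python: weighted sum plus bias, output 1 if the total is at
# least 0, otherwise 0. Inputs and output are all 0s and 1s.

def perceptron(inputs, weights, bias):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else 0

# Made-up weights (chosen by hand, not learned) for the mammal example.
# Features: [gives birth to live young, has feathers, has hair, lays eggs]
weights = [2, -2, 1, -2]
bias = -2

cow = [1, 0, 1, 0]
blackbird = [0, 1, 0, 1]
print(perceptron(cow, weights, bias))        # 1: classified as a mammal
print(perceptron(blackbird, weights, bias))  # 0: classified as not a mammal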

Machines made of lots of perceptrons were created and experiments were done with them to show what AIs could do. For example, Rosenblatt built one called Tobermory with 12,000 weights designed to do speech recognition. However, you can also explore the limits of what can be done computationally through theory: using maths and logic, rather than just by invention and experiments, and that kind of theoretical computer science was what others did about perceptrons. A key question in theoretical computer science about computers is “What is computable?” Can your new invention compute anything a normal computer can? Alan Turing had previously proved an important result about the limits of what any computer could do, so what about an artificial intelligence made of perceptrons? Could it learn to do anything a computer could or was it less powerful than that?

As a perceptron is something that takes 1s and 0s and returns a 1 or 0, it is a way of implementing logic: AND gates, OR gates, NOT gates and so on. If it can be used to implement all the basic logical operators then a machine made of perceptrons can do anything a computer can do, as computers are built up out of basic logical operators. So that raises a simple question: can you actually implement all the logical operators with perceptrons set appropriately? If not, then no perceptron machine will ever be as powerful as a computer made of logic gates! Two of the giants of the area, Marvin Minsky and Seymour Papert, investigated this. What they discovered contributed to the AI winter (but only because the result was misunderstood!)

Let us see what it involves. First, can we implement an AND gate with appropriate weightings and bias values with a perceptron? An AND gate has the following truth table, so that it only outputs 1 if both its inputs are 1:

Truth table for an AND gate

So to implement it with a perceptron, we need to come up with a positive or negative number for the bias, and other numbers for w1 and w2 that weight the two inputs. The numbers chosen need to lead to it giving output 1 only when the two inputs (in1 and in2) are both 1, and otherwise giving output 0.

bias + w1.in1 + w2.in2 >= 0 when in1 = 1 AND in2 = 1
bias + w1.in1 + w2.in2 < 0 otherwise

See if you can work out the answer before reading on.

A perceptron for an AND gate needs values set for bias, w1 and w2

It can be done by setting the value of the bias to -2 and making both weightings, w1 and w2, have value 1. Then, because the two inputs, in1 and in2, can only be 1 or 0, it takes both inputs being 1 to overcome the bias of -2 and so raise the sum up to 0:

bias + w1.in1 + w2.in2 >= 0
-2 + 1.in1 + 1.in2 >= 0
-2 + 1.1 + 1.1 >= 0
A perceptron implementing an AND gate

So far so good. Now, see if you can work out weightings to make an OR gate and a NOT gate.

Truth table for an OR gate
Truth table for a NOT gate

It is possible to implement both OR and NOT gate as a perceptron (see answers at the end).

However, Minsky and Papert proved that it was impossible to create another kind of logical operator, an XOR gate, with any values of bias and weightings in a perceptron. This is a logic gate that outputs 1 if its inputs are different, and outputs 0 if its inputs are the same.

Truth table for an XOR gate

Can you prove it is impossible?

They had seemingly shown that a perceptron could not compute everything a computer could. Perceptrons were not as expressive, so not as powerful (and never could be as powerful) as a computer. There were things they could never learn to do, as there were things as simple as an XOR gate that they could not even represent. This led some to believe the result meant AIs based on perceptrons were a dead end. It was better to just work with traditional computers and traditional computing (which by this point were much faster anyway). Along with the way that the promises of AI had been over-hyped with exaggerated expectations, and the fact that the applications that had emerged so far had been fairly insignificant, this seemingly damning theoretical blow led to funding for AI research drying up.

However, as current machine learning tools show, it was never that bad. The theoretical result had been misunderstood, and research into neural networks based on perceptrons eventually took off again in the 1990s.

Minsky and Papert’s result is about what a single perceptron can do, not about what multiple ones can do together. More specifically, if you have perceptrons in a single layer, each producing its output directly from its own inputs, the theoretical limitations apply. However, if you make multiple layers of perceptrons, with the outputs of one layer feeding into the next, the negative result no longer applies. After all, we can make AND, OR and NOT gates from perceptrons, and by wiring them together so the outputs of one are the inputs of the next, we can build an XOR gate just as we can with normal logic gates!

An XOR gate from layers of perceptrons set as AND, OR and NOT operators

We can therefore build an XOR gate from perceptrons. We just need multi-layer perceptrons, an idea that was actually known about in the 1960s, including by Minsky and Papert. However, without funding, making further progress became difficult and the AI winter started, where little research was done on any kind of Artificial Intelligence, and so little progress was made.
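
Here is that layering as a Python sketch, using the perceptron rule from above and the gate settings from this article (the OR and NOT settings are the ones given in the Answers below). It computes XOR(a, b) as (a OR b) AND NOT (a AND b):

# XOR from two layers of perceptrons.

def perceptron(inputs, weights, bias):    # the same rule as before
    return 1 if bias + sum(w * x for w, x in zip(weights, inputs)) >= 0 else 0

def AND(a, b): return perceptron([a, b], [1, 1], -2)
def OR(a, b):  return perceptron([a, b], [1, 1], -1)
def NOT(a):    return perceptron([a], [-1], 0)

def XOR(a, b):
    # Layer 1 computes OR(a, b) and AND(a, b); layer 2 combines their outputs.
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))   # prints 0, 1, 1, 0: an XOR gate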

The theoretical result about the limits of what perceptrons could do was an important and profound one, but the limitations of the result needed to be understood too, and that means understanding the assumptions it is based on (it is not about multi-layer perceptrons). Now AI is back, though arguably being over-hyped again, so perhaps we should learn from the past! Theoretical work on the limits of what neural networks can and can’t do is an active research area that is as vital as ever. Let’s just make sure we understand what results mean before we jump to any conclusions. Right now theoretical results about AI need more funding, not a new winter!

– Paul Curzon, Queen Mary University of London

This article is based on an introductory segment of a research seminar on the expressive power of graph neural networks by Przemek Walega, Queen Mary University of London, October 2025.

More on …

Answers

An OR gate perceptron can be made with bias = -1, w1 = w2 = 1

A NOT gate perceptron can be made with bias = 0, w1 = -1

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

An AI Oppenheimer Moment?

A nuclear explosion mushroom cloud
Image by Harsh Ghanshyam from Pixabay

All computer scientists should watch the staggeringly good film Oppenheimer, by Christopher Nolan. It charts the life of J. Robert Oppenheimer, “father of the atom bomb”, and the team he put together at Los Alamos, as they designed and built the first weapons of mass destruction. The film is about science, politics and war, not computer science, and all the science is quantum physics (portrayed incredibly well). Despite that, Christopher Nolan believes the film does have lessons for all scientists, and especially those in Silicon Valley.

Why? In an interview, he suggested that given the current state of Artificial Intelligence the world is at “an Oppenheimer moment”. Computer scientists, in the 2020s, just like physicists in the 1940s, are creating technology that could be used for great good but also cause great harm (including, in both cases, a possibility that we use it in a way that destroys civilisation). Should scientists and technologists stay outside the political realm and leave discussion of what to do with their technology to politicians, while the scientists do as they wish in the name of science? That leaves society playing a game of catch-up. Or do scientists and technologists have more responsibility than that?

Artificial Intelligence isn’t so obviously capable of doing bad things as an atomic bomb was and still clearly is. There is also no clear imperative, such as Oppenheimer had, to get there before the fascist Nazi party, who were clearly evil and already using technology for evil (now the main imperative seems to be just to get there before someone else makes all the money, not you). It is, therefore, far easier for those creating AI technology to ignore both the potential and the real effects of their inventions on society. However, it is now clear AI can do, and already is doing, lots of bad as well as good. Many scientists understand this and are focussing their work on developing versions that are, for example, built to be transparent and accountable, that are not biased, racist or homophobic, … that do put children’s protection at the heart of what they do… Unfortunately, not all are. And there is one big elephant in the room. AI can be, and is being, put in control of weapons in wars that are actively taking place right now. There is an arms race to get there before the other side does. From mass identification of targets in the Middle East to AI-controlled drone strikes in the Ukraine war, military AI is a reality and is in control of killing people with only minimal, if any, real humans in the loop. Do we really want that? Do we want AIs in control of weapons of mass destruction? Or is that total madness that will lead only to our destruction?

Oppenheimer was a complex man, as the film showed. He believed in peace but, a brilliant theoretical physicist himself, he managed a group of the best scientists in the world in the creation of the greatest weapon of destruction ever built to that point: the first atom bomb. He believed it had to be used once so that everyone would understand that all-out nuclear war would end civilisation (it was of course used against Japan, not the already defeated Nazis, the original justification). However, he also spent the rest of his life working for peace, arguing that international agreements were vital to prevent such weapons ever being used again. In times of relative peace people forget about the power we have to destroy everyone. The worries only surface again when there is international tension and wars break out, such as in the Middle East or Ukraine. We need to always remember the possibility is there, though, lest we use them by mistake. Oppenheimer thought the bomb would actually end war, having come up with the idea of “mutually assured destruction” as a means for peace. The phrase aimed to remind people that these weapons could never be used. He worked tirelessly, arguing for international regulation and agreements to prevent their use.

Christopher Nolan was asked, if there was a special screening of the film in Silicon Valley, what message he would hope the computer scientists and technologists would take from it. His answer was that they should take home the message of the need for accountability. Scientists do have to be accountable for their work, especially when it is capable of having massively bad consequences for society. A key part of that is engaging with the public, industry and government; not with vested interests pushing for their own work to be allowed, but to make sure the public and policymakers do understand the science and technology so there can be fully informed debate. Both international law and international policy are now a long way behind the pace of technological development. The willingness of countries to obey international law is also disintegrating, and there is a new, subtle difference to the 1940s: technology companies are now as rich and powerful as many countries, so corporate accountability is now needed too, not just agreements between countries.

Oppenheimer was vilified over his politics after the war, and his name is now forever linked with weapons of mass destruction. He certainly didn’t get everything right: there have been plenty of wars since, so he didn’t manage to end all war as he had hoped, though so far no nuclear war. However, despite the vilification, he did spend his life making sure everyone understood the consequences of his work. Asked if he believed we had created the means to kill tens of millions of Americans (everyone) at a stroke, his answer was a clear “Yes”. He did ultimately make himself accountable for the things he had done. That is something every scientist should do too. The Doomsday Clock is closer to midnight than ever (89 seconds to midnight: man-made global catastrophe). Let’s hope the Tech Bros and scientists of Silicon Valley are willing to become accountable too, never mind countries. All scientists and technologists should watch Oppenheimer and reflect.

– Paul Curzon, Queen Mary University of London

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

The Sweet Learning Computer: Learning Ladder

The board for the ladder game with the piece on the bottom rung
The Ladder board. Image by Paul Curzon

Can a machine learn from its mistakes, until it plays a game perfectly, just by following rules? Donald Michie worked out a way in the 1960s. He made a machine out of matchboxes and beads called MENACE that did just that. Our version plays the game Ladder and is made of cups and sweets. Punish the machine when it loses by eating its sweets!

Let’s play the game, Ladder. It is played on a board like a ladder with a single piece (an X) placed on the bottom rung of the ladder. Players take it in turns to make a move, either 1, 2 or 3 places up the ladder. You win if you move the piece to the top of the ladder, so reach the target. We will play on a ladder with 10 rungs as on the right (but you can play on larger ladders).

To make the learning machine, you need 9 plastic cups and lots of wrapped sweets coloured red, green and purple. Spread out the sheets showing the possible board positions (see below) and place a cup on each. Put coloured sweets in each cup to match the arrows: for most positions there are red, green and purple arrows, so you put a red, green and purple sweet in those cups. Once all cups have sweets matching the arrows, your machine is ready to play (and learn).

The machine plays first. Each cup sits on a possible board position that your machine could end up in. Find the cup that matches the board position the game is in when it is its go.  Shut your eyes and take a sweet at random from that cup, placing it next to the cup. Make the move indicated by the arrow of that colour. Then the machine’s human opponent makes a move. Once they have moved the machine plays in the same way again, finding the position and taking a sweet to decide its move. Keep playing alternately like this until someone wins. If the machine ends up in a position with no sweets in that cup, then it resigns.

The possible board positions showing possible moves with coloured arrows.
The 9 board positions with arrows showing possible moves. Place a cup on each board position with sweets corresponding to the arrows. Image by Paul Curzon

If the machine loses, then eat the sweet corresponding to the last move it made. It will never make that mistake again! Win or lose, put all the other sweets back.

The initial cup for board position 8, with a red and purple sweet.
The initial cup for board position 8, with a red and purple sweet. Image by Paul Curzon

Now, play lots of games like that, punishing the machine by eating the sweet of its last move each time it loses. The machine will play badly at first. It’s just making moves at random. The more it loses, the more sweets (losing moves) you eat, so the better it gets. Eventually, it will play perfectly. No one told it how to win – it learnt from its mistakes because you ate its sweets! Gradually the sweets left encode rules of how to win.

Try slightly different rules. At the moment we just punish bad moves. You could reward all the moves that led to it by adding another sweet of the same colour too. Now the machine will be more likely to make those moves again. What other variations of rewards and punishments could you try?

Why not write a program that learns in the same way – but using data values in arrays to represent moves instead of sweets. Not so yummy!
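
Here is one minimal way to do that in Python (a toy version with our own simplifications, not Donald Michie’s original code): each cup becomes a list of the legal moves at that position, the opponent plays at random, and losing removes the machine’s last move from the cup it came from.

# A toy, array-based version of the sweets machine for the Ladder game.
import random

TOP = 10   # rungs on the ladder; the piece starts on rung 1 and the machine goes first

def fresh_cups():
    # One 'cup' per board position 1..9, holding its legal moves (the sweets).
    return {pos: [m for m in (1, 2, 3) if pos + m <= TOP] for pos in range(1, TOP)}

def play_one_game(cups):
    pos, last = 1, None
    while True:
        if not cups[pos]:                      # empty cup: the machine resigns
            machine_lost = True
            break
        move = random.choice(cups[pos])        # take a sweet at random
        last, pos = (pos, move), pos + move
        if pos == TOP:                         # machine reaches the top: it wins
            machine_lost = False
            break
        pos += random.choice([m for m in (1, 2, 3) if pos + m <= TOP])  # opponent's go
        if pos == TOP:                         # opponent reaches the top first
            machine_lost = True
            break
    if machine_lost and last is not None:
        cups[last[0]].remove(last[1])          # punish: eat the sweet for that move

cups = fresh_cups()
for _ in range(20000):                         # play lots of games
    play_one_game(cups)
print(cups)                                    # the surviving sweets encode how to win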

– Paul Curzon, Queen Mary University of London

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Film Futures: Brassed Off

The pit head of a colliery at sunset with a vivid red sky behind the setting sun
Image from Pixabay

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film Brassed Off.

***SPOILER ALERT***

Brassed Off, starring Pete Postlethwaite, Tara Fitzgerald and Ewan McGregor, is set at a time when the UK coal and steel industries were being closed down with terrible effects on local communities across the North of England and Wales. It tells the story of the closing of the fictional Grimley Pit (based on the real mining village of Grimethorpe), from the point of view of the members of the colliery brass band and their families. The whole village relies on the pit for their livelihoods.

Danny, the band’s conductor is passionate about the band and wants to keep it going, even if the pit closes. Many of the other band members are totally despondent and just want to take the money that is on offer if they agree to the closure without a fight. They feel they have no future, and have given up hope over both the pit and the band (why have a colliery band if there is no colliery?)

Gloria, a company manager who grew up in the village arrives, conducting a feasibility study for the company to determine if the pit is profitable or not as justification for keeping it open or closing it down. A wonderful musician, she joins the band but doesn’t tell them that she is now management (including not telling her childhood boyfriend, and band member, Andy).

The story follows the battle to keep the pit open, and the effects on the community if it closes, through the eyes of the band members as they take part in a likely final ever brass band competition…

Brassed Off: with computer science

In our computer science film future version, the pit is still closing and Gloria is still management, but, with a Computer Science PhD in digital music, she has built a flugelhorn-playing robot with a creative AI brain. It can not only play brass band instruments but arrange and compose too. On arriving at Grimley she asks if her robot can join the band. Initially, everyone is against the idea, but on hearing how good it is, and how it will help them do well in the national brass band competition, they relent. The band, with robot, go all the way to the finals and ultimately win…

The pit, however, closes and there are no jobs at all, not even low-quality work in local supermarkets (automatic tills and robot shelf-stackers have replaced humans) or call centres (now replaced by chatbots). Gloria also loses her job in a shake-out of middle managers as the AIs take over the knowledge economy jobs. Luckily, she is ok, as with university friends she starts a company building robot musicians, which is an amazing success. The band never make the finals again as bands full of Gloria’s flugelhorn- and cornet-playing robots take over (also taking the last of the band’s self-esteem). In future years, all the brass bands in the competition are robot bands as, with all the pits closing, the communities around them collapse. The world’s last ever flugelhorn player is a robot. Gloria and Andy never do get to kiss…

In real life…

Could a robot play a musical instrument? One existed centuries before the computer age. In 1737 Jacques de Vaucanson revealed his flute-playing automaton to the public. A small, human-height figure, it played a real flute, which could be replaced to prove the machine really was playing a real instrument. Robots have since played various instruments, including drums, and a cello-playing robot has played with an orchestra in Malmö. While robot orchestras and bands are likely, it seems less likely that humans would stop playing as a result.

Can an AI compose music? The Victorian Ada Lovelace predicted they one day would, a century before the first computer was ever built. She realised that this would be the case just from thinking about the machines that Charles Babbage was trying to build. Her prediction eventually came true. Now, of course, generative AI is being used to compose music, and can do so in any style, whether classical or pop. How good, or creative, it is may be debated but it won’t be long before they have super-human music composition powers.

So, a flugelhorn playing robot, that also composes music, is not a pipe dream!

What about the social costs that are the real theme of the film though? When the UK pits and steelworks closed whole communities were destroyed with great, and long lasting, social cost. It was all well and good for politicians to say there are new jobs being created by the new service and knowledge economy, but that was no help when no thought or money had actually been put in to helping communities make the transition. “Get on your bike” was their famous, if ineffective, solution. For example, if the new jobs were to be in technology as suggested then massive technology training programmes for those put out of work were needed, along with financial support in the meantime. Instead, whole communities were effectively left to rot and inequality increased massively. Areas in the North of England and Wales that had been the backbone of the UK economy, still haven’t really recovered 40 years later.

Are we about to make the same mistakes again? We are certainly arriving at a similar point, but now it is those knowledge economy jobs that were supposed to be the saviours 40 years ago that are under threat from AI. There may well be new jobs as old ones disappear…but even if there are, will the people who lose their jobs be in a position to take the new ones, or are we heading towards a whole new lost generation? As back then, without serious planning and support, including successful efforts to reduce inequality in society, the changes coming could again cause devastation, this time much more widespread. As it stands, technology is increasing, not decreasing, inequality. We need to start now, including coming up with a new economic model of how the world will work that actively reduces inequality in society. Many science fiction writers have written of utopian futures where people only work for fun (eg Arthur C Clarke’s classic “Childhood’s End” is one I’m reading at the moment), but that only happens if wealth is not sucked up by the lucky few. (In “Childhood’s End” it takes alien invaders to force out inequality.)

We can avoid a dystopian future, but only if we try…really hard.

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Google’s “PigeonRank” and arty-pigeon intelligence

pigeon
Pigeon, possibly pondering people’s photographs.
Image by Davgood Kirshot from Pixabay

On April Fool’s Day in 2002 Google ‘admitted’ to its users that the reason their web search results appeared so quickly and were so accurate was because, rather than using automated processes to grab the best result, Google was actually using a bank of pigeons to select the best results. Millions of pigeons viewing web pages and pecking to pick the best one for you when you type in your search question. Pretty unlikely, right?

In a rather surprising non-April Fool twist some researchers decided to test how well pigeons can distinguish different types of information in medical photographs.

Letting the pigeons learn from training data
They trained pigeons by getting them to view medical pictures of tissue samples taken from healthy people as well as pictures taken from people who were ill. The pigeons had to peck one of two coloured buttons and in doing so learned which pictures were of healthy tissue and which were diseased. If they pecked the correct button they got an extra food reward.

Seeing if their new knowledge is ‘generalisable’ (can be applied to unfamiliar images)
The researchers then tested the pigeons with a fresh set of pictures, to see if they could apply their learning to pictures they’d not seen before. Incredibly the pigeons were pretty good at separating the pictures into healthy and unhealthy, with an 80 per cent hit rate. Doctors and pathologists* probably don’t have to worry too much about pigeons stealing their jobs though as the pigeons weren’t very good at the more complex cases. However this is still useful information. Researchers think that they might be able to learn something, about how humans learn to distinguish images, by understanding the ways in which pigeons’ brains and memory works (or don’t work). There are some similarities between pigeons’ and people’s visual systems (the ways our eyes and brains help us understand an image).

[*pathology means the study of diseases. A pathologist is a medical doctor or clinical scientist who might examine tissue samples (or images of tissue samples) to help doctors diagnose and treat diseases.]

How well can you categorise?

This is similar to a way that some artificial intelligences work. A type of machine learning called supervised learning gives an artificial intelligence system a batch of photographs labelled ‘A’, e.g. cats, and a different batch of photographs labelled ‘B’, e.g. dogs. The system makes lots of measurements of all the pictures within the two categories and can use this information to decide if a new picture is ‘CAT’ or ‘DOG’ and also how confident it is in saying which one.
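
Here is a toy sketch in Python of that kind of supervised classifier (a simple nearest-centroid version, with made-up pairs of numbers standing in for the measurements a real system would take from photographs):

# A toy supervised classifier: average the measurements for each label, then
# classify a new example by whichever average it is closest to.
import math

training = {   # hypothetical 'measurements' of labelled pictures
    "CAT": [(0.9, 0.2), (0.8, 0.3), (0.85, 0.25)],
    "DOG": [(0.3, 0.8), (0.2, 0.9), (0.25, 0.85)],
}

centroids = {label: tuple(sum(v) / len(v) for v in zip(*examples))
             for label, examples in training.items()}

def classify(features):
    """Return the nearest label and a rough 0..1 confidence score."""
    dists = {label: math.dist(features, c) for label, c in centroids.items()}
    best = min(dists, key=dists.get)
    confidence = 1 - dists[best] / (dists[best] + max(dists.values()))
    return best, confidence

print(classify((0.7, 0.3)))   # most likely ('CAT', ...)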

Can pigeons tell art apart?

Pigeons were also given a button to peck and shown artworks by Picasso or Monet. At first they’d peck the button randomly but soon learned that they’d get a treat if they pecked at the same time they were shown a Picasso. When a Monet appeared they got no treat. After a while they learned to peck when they saw the Picasso artworks and not peck when shown a Monet. But what happened if they were shown a Monet or Picasso painting that they hadn’t seen before? Amazingly they were pretty good, pecking for rewards when the new art was by Picasso and ignoring the button when it was a new Monet. Art critics can breathe a sigh of relief though. If the paintings were turned upside down the pigeons were back to square one and couldn’t tell them apart.

Like pigeons, even humans can get this wrong sometimes. In 2022 an art curator realised that a painting by Piet Mondrian had been displayed upside down for 75 years… I wonder if the pigeons would have spotted that.

– Jo Brodie, Queen Mary University of London



Part of a series of ‘whimsical fun in computing’ to celebrate April Fool’s (all month long!).

Find out about some of the rather surprising things computer scientists have got up to when they're in a playful mood.

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Hiroshi Kawano and his AI abstract artist

Piet Mondrian is famous for his pioneering pure abstract paintings that consist of blocks of colour with thick black borders. This series of works is iconic now: you can buy designs based on them on socks, cards, bags, T-shirts, vases, and more. He also inspired one of the first creative art programs. Written by Hiroshi Kawano, it created new abstract art after Mondrian.

An Artificial Mondrian style picture of blocks of primary colours with black borders.
Image by CS4FN, after Mondrian, inspired by Kawano’s Artificial Mondrian

Hiroshi Kawano was himself a pioneer of digital and algorithmic art. From 1964 he produced a series of works that were algorithmically created, in that they followed instructions to produce the designs, but those designs were all different as they included random number generators: effectively turning art into a game of chance, throwing dice to see what to do next. Randomness can be brought in this way to make decisions about the sizes, positions, shapes and colours in the images, for example.

His Artificial Mondrian series from the late 1960s was more sophisticated than this, though. He first analysed Mondrian’s paintings, determining how frequently each colour appeared in each position on the canvas. This gave him a statistical profile of real Mondrian works. His Artificial Mondrian program then generated new designs based on coloured rectangles, but where the random number generator was made to match the statistical pattern of Mondrian’s creative decisions when choosing what block of colour to paint in an area. The dice were in effect loaded to match Mondrian’s choices. The resulting design was not a Mondrian, but had the same mathematical signature as one that Mondrian might paint. One example, KD 29, is on display at the Tate Modern until June 2025 as part of the Electric Dreams exhibition (you can also buy a print from the Tate Modern shop).
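
Here is a toy sketch in Python of that loaded-dice idea (not Kawano’s actual program, and with a made-up colour profile rather than one measured from real Mondrian paintings): the colour of each cell is chosen at random, but weighted by the profile.

# A toy 'loaded dice' design generator. Each cell's colour is drawn at random,
# weighted by a (made-up) statistical profile of colour frequencies.
import random

colours = ["white", "red", "blue", "yellow", "black"]
profile = [0.55, 0.15, 0.12, 0.12, 0.06]   # hypothetical frequencies, not measured

random.seed(29)                            # fix the 'creative decisions' so they repeat
grid = [[random.choices(colours, weights=profile)[0] for _ in range(6)]
        for _ in range(6)]

for row in grid:
    print(" ".join(f"{c:6}" for c in row))  # each cell would become a bordered block of colour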

Kawano’s program didn’t actually paint, it just created the designs and then Hiroshi did the actual painting following the program’s design. Colour computer printers were not available then but the program could print out the patterns of black rectangles that he then coloured in.

Whilst far simpler, his program’s approach prefigures the way modern generative AI programs that create images work. They are trained on vast numbers of images, from the web, for example. They then create a new image based on what is statistically likely to match the prompt given. Ask for a cat and you get an image that statistically matches existing images labelled as cats. Like his, the generative AI programs are combining algorithm, statistics from existing art, and randomness to create new images.

Is such algorithmic art really creative in the way an artist is creative, though? It is quite easy (and fun) to create your own Mondrian-inspired art, even without an AI. However, the real creativity of an artist is in coming up with such a new, iconic and visually powerful art style in the first place, as Piet Mondrian did, not in just copying his style. The most famous artists are famous because they came up with a signature style. Only when the programs are doing that will they be as creative as the great human artists. Hiroshi Kawano’s art (as opposed to his program’s) perhaps does pass the test, as he came up with a completely novel medium for creating art. That in itself was incredibly creative at the time.

Paul Curzon, Queen Mary University of London

More on …

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing

Our Books …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page and talk are funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos