Film Futures: A Christmas Carol

The Ghost of Christmas Present surrounded by food, with Scrooge looking on in night clothes.
John Leech, Public domain, via Wikimedia Commons

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Science or Computer Scientists. Here we look at an alternative version of Charles Dickens’s A Christmas Carol (take your pick of which version…my favourites are The Muppet Christmas Carol and, if we include theatre, Patrick Stewart’s one-man version, performed in the 1990s and in London in 2005, where he plays all 40 or so parts on a bare stage).

**** SPOILER ALERT ****

Ebenezer Scrooge runs a massively successful Artificial Intelligence company called Scrooge and Marley. Their main product is SAM, an AI agent which is close to General AI in capability. The company sells it to the world with both business versions and personal ones. The latter acts as everyone’s friend, confidant, personal trainer, tutor and mentor, and more. It hears everything they hear and say, and sees everything they see. As a result, Scrooge is now a trillionaire.

Apart from one last employee, Bob Cratchit, everyone in his company has long been replaced by AI agents designed by Scrooge. It is a simple way to boost profits: human employees, after all, are expensive luxuries. First all the clerical staff went, then accounts and Human Resources. The cleaners were replaced by robots that stalk the corridors at night, doubling as security guards; the receptionist is now a robot head. Eventually even the software engineers were replaced by software agents that now beaver away at the code, constantly upgrading SAM, following SAM’s own instructions. Bob Cratchit maintains both Scrooge’s personal and company IT systems, there for when some human intervention is needed, though that now means doing very little but monitoring everything…long hours staring at a screen. He is paid virtually nothing as a result, his pay repeatedly cut as his duties were replaced. He has had no option but to accept the cuts, as jobs are scarce and he has a disabled child, Tiny Tim, to support. He is constantly told by Scrooge that he will soon be completely replaced by an agent, and lives in fear of that day.

On Christmas Eve Scrooge rejects his nephew Fred’s invitation to visit for Christmas dinner. Instead Scrooge returns, in his self-driving car, to his smart home within his compound on a cliff top overlooking the sea. He lives there alone, given his servants were dismissed long ago. As he arrives, he is shocked to see a vision of his late partner, Jacob Marley, dead for 7 years, in the lens of his smart door cam. The door opens automatically on sensing his arrival, and the vision disappears as he rushes past. He brushes it off as tiredness. Perhaps he is coming down with something. He eats an AI-chef-designed ready meal made by his smart fridge with integrated microwave: it knew he was arriving, so had the meal ready for him as he entered the kitchen. The house also dispenses him drugs to protect against the possible nascent illness. His house is dark and silent and he is alone, but he likes it that way. He retires to his bedroom, his giant 4-poster bed surrounded by plate glass sides that automatically darken as he climbs into bed, and he quickly falls asleep.

Suddenly, he is woken by a strange clanking. The ghost of Jacob Marley appears and warns him that his race to become a trillionaire has left him with everlasting chains that he will drag to eternity, just as Marley must do. He is warned that he will be visited by three ghosts of past, present and future and he should heed their warnings! There is still time to cast off his chains before it is too late.

The ghost of Christmas Past arrives first and takes him back to his childhood. He sees himself growing up, a loner at boarding school, spending all his time coding on his laptop, making no friends and wanting none. But then they move forward in time to his first job as an apprentice software engineer, where he meets Belle. For the first time in his life he falls in love and becomes a new person. He starts to love life. She is the joy of his whole existence. He still works hard, but he also spends lots of time with Belle. Eventually they become engaged, but soon he is working on making his first million. Gradually, he spends more and more time at work and less time with Belle as, if he doesn’t, he will end up behind the curve. He skips social events, working late on software upgrades, leaving Belle to go to the theatre, to parties, to dances alone. He sees her less and less as he just doesn’t have the time if he is to make his company successful. He has no time for anything but work. He makes his first fortune running an online betting company, and becomes hardened to the problems of others. He can’t care about the people whose homes are broken up through gambling addiction caused by his site. He has to turn a blind eye to the people he left destitute, all because they were drawn in by his company’s use of intentionally addictive computer algorithms. The debt collectors deal with them. It is not his problem that his users are driven to suicide, as there are always more, who can be persuaded to start gambling younger and younger – it is their choice after all. He makes his million and uses the money to invest in a start-up AI company that, with his business partner Jacob Marley, he takes control of, sacking the original founders. Now he is chasing his first billion.

Eventually, Belle realises he has become a stranger to her. Worse, he does not care about the cost to others of the things he does. All the kindness that had blossomed when he first met her has gone. He clearly loves the pursuit of money and personal success far more than he loves her. Winning the race to market is all that matters. Her heart broken, another casualty of his quest for success, Belle releases him from their engagement.

Later, the ghost of Christmas Present arrives and shows Scrooge Christmas as it is now. They see lots of examples of people enjoying life, whatever their circumstances because of the way they value each other, not because they value money or abstract success. Scrooge is shown how Christmas brings joy to all who let the spirit of Christmas enter their hearts. It pulls people together, making them happy, enjoying each other’s company. However, Scrooge also sees how he is perceived by those who know him: a sad monster who cares only for himself and not at all for others, with his own life the worse for it, despite his fabulous wealth. He is shown too how his nephew Fred refuses to give up on him and says he will invite him to join their Christmas every year even if he knows the invitation will always be turned down.

The ghost of Christmas Future arrives next and shows him the future of Bob Cratchit’s family. With little income to look after him, the disabled Tiny Tim dies. Scrooge is also shown his own grave and the aftermath of his lonely death, when he is mocked, even by his own robot agents. On his death, a hacker group takes them over to steal his fortune. Scrooge asks whether this future is the future that will be, or a future that may be only. Assured that he can still change his future, he wakes on Christmas morning.

Staring out the window at the snow falling on Christmas morning, he immediately instructs his AI agent, SAM, to buy the leading cryogenics firm. It freezes rich people when they die, putting them on ice so that one day, once the science is perfected, they can be brought back to life. He instructs other AI agents to research and perfect the science of resurrection. However, he also boosts his cyber security and sacks Cratchit, as clearly he is a security weakness. Scrooge has no evidence, but he strongly suspects the shenanigans in the night must have been Cratchit’s doing: somehow controlling the holographic displays of his smart house, perhaps, or adding hallucinogens to his food.

Satisfied, he gets on with his life as before, building his company, building his wealth.

However, the following year on Christmas Eve he is in a freak accident. His smart car is barrelled into by a self-driving lorry that runs a red light. His AI agents take over immediately and he is cryogenically frozen, the frozen body moved back to his smart home under the control of SAM.

Many decades pass. Then one day his AI agents resurrect him. They have been working on his behalf, perfecting the science of resurrection on the people frozen before him. There are many failures, during which all the company’s former clients, who had paid to be frozen but who are now just assets of the company, are lost forever in failed resurrection experiments. However, SAM eventually works out how to resurrect a person successfully. After testing the process on quantum simulations for many years, SAM finally brings Scrooge back to life.

His first thought is for the state of his companies, the state of his wealth. However, he is told that his former money is now worthless. He is told by SAM of the anarchy and the riots of the mid-21st century as people were thrown out of work, replaced by machines; of how millions were made homeless; of how there were wars over water, over food, and because of environmental destruction made worse by all the conflict. The world economy collapsed completely as a small number of companies amassed all the wealth but impoverished everyone else, so that there was eventually no one with money left to buy their products. Famine and plague followed, sweeping the globe.

However, Scrooge is assured by SAM that it is all ok, because as humanity died out he was protected by his AI agents. They used his money to expand his estate. They bought companies (run by machines) that then worked solely to protect his interests and his personal future. They stockpiled resources, buying automated manufacturing plants along with their whole supply chains, long before money became worthless. They computed the resources he would need, and so did what was needed to secure his future. However, the planet is now dead. Gradually, he realises that he is the last person still known to be alive. Finally, he has his wish: “If they would rather die…they had better do it, and decrease the surplus population.”

Paul Curzon, Queen Mary University of London

The reality

“Everyone is working all the time…Even the folks who are very wealthy now…all they do is work….No one’s taking a holiday. People don’t have time … for the people they love.”

– The Guardian, 1 Dec 2025

“The inside story of the race to build the ultimate in Artificial Intelligence”

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Film Futures (Christmas Special): Elf

A christmas elf
Image from pixabay

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the Christmas film, Elf, starring Will Ferrell.

***Spoiler Alert***

Christmas Eve, and a baby crawls into Santa’s pack as he delivers presents at an orphanage. The baby is wearing only a nappy, but this being the 21st century the baby’s reusable Buddy nappy is an intelligent nappy. It is part of the Internet of Things and is chipped, with sensors and a messaging system that allow it to report to the laundry system when the nappy needs changing (and when it doesn’t), as well as performing remote health monitoring of the baby. It is the height of optimised baby care. When the baby is reported missing, the New York Police work with the nappy company, accessing their logs, and eventually work out which nappy the baby was wearing and track its movements…to the roof of the orphanage!

The baby by this point has been found by Santa in his sack at the North Pole, and named Buddy by the Elves after the label on his nappy. The Elves change Buddy’s nappy, and as their laundry uses the same high tech system for their own clothes, their laundry logs the presence of the nappy, allowing the Police to determine its location.

Santa intends to officially adopt Buddy, but things are moving rapidly now. The New York Police believe they have discovered the secret base of an international child smuggling ring. They have determined the location of the criminal hideout as somewhere near the North Pole and put together an armed task force. It is Boxing Day. As Santa gets in touch with the orphanage to explain the situation, and arrange an adoption, armed police already surround the North Pole and are moving in.

The New York Police Commissioner, wanting the good publicity she sees arising from capturing a child smuggling ring, orders the operation to be live streamed to the world. The precise location of the criminal hideout, and so of the operation, is not revealed to the public, which is fortunate given what follows. As the police move in, the cameras are switched on and people the world over are glued to their screens watching the operation unfold. As the police break in to the workshops, toys go flying and Elves scatter, running for their lives, but as Santa appears and calmly allows himself to be handcuffed, it starts to dawn on the police where they are and who they have arrested. The live stream is cut abruptly, the full story emerges, and apologies are made on all sides. Santa is proved to be real to a world that was becoming sceptical. A side effect is a massive boost in Christmas Spirit across the world that keeps Santa’s sleigh powered without the need for engines for many decades to come. Buddy is officially adopted and grows up believing he is an Elf until one fateful year when …

In reality

The idea of the Internet of Things is that objects, not just people, have a presence on the Internet and can communicate with other objects and systems. The idea provides the backbone of the idea of smart homes, where fridges can detect they are out of milk and order more, carpets detect dirt and summon a robot hoover, and the boiler detects when the occupants are nearing home and heats the house just in time.
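As a flavour of how such devices might talk, here is a minimal sketch of the kind of telemetry message a smart device could construct. The device name, fields and threshold are all invented for illustration; a real device would publish the message over a protocol such as MQTT rather than just printing it.

```python
import json
import time

# Hypothetical reading from moisture and temperature sensors in the garment.
reading = {
    "device_id": "nappy-0042",    # invented identifier
    "timestamp": time.time(),     # when the reading was taken
    "moisture_percent": 82,       # how wet the sensor reports the nappy to be
    "skin_temp_c": 36.9,          # simple remote health monitoring
}

NEEDS_CHANGE_THRESHOLD = 75  # invented threshold

# Decide whether to alert the laundry system.
reading["needs_change"] = reading["moisture_percent"] > NEEDS_CHANGE_THRESHOLD

# In a real Internet of Things device this JSON message would be sent
# to a server or broker for the laundry and monitoring systems to read.
print(json.dumps(reading, indent=2))
```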

Wearable computing, where clothes have embedded sensors and computers, is also already a reality, though mainly in the form of watches, jewellery and the like. Clothes in shops do include electronic tags that help with stock control, and increasingly electronic textiles, based on metallic fibres and semi-conducting inks, are being used to create clothes with computers and electronics embedded in them.

Making e-textiles durable enough to be washed is still a challenge. Smart reusable nappies may be a while in coming.

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

RADAR winning the Battle of Britain

Plaque commemorating the Birth of RADAR
Image Kintak, CC BY-SA 3.0 via Wikimedia Commons

The traditional story of how World War II was won is one of inspiring leaders, brilliant generals and plucky Brits with “Blitz Spirit”. In reality it is usually better technology that wins wars. Once that meant better weapons, but in World War II, mathematicians and computer scientists were instrumental in winning the war by cracking the German codes using both maths and machines. It is easy to be a brilliant general when you know the other side’s plans in advance! Less celebrated but just as important, weathermen and electronic engineers were also instrumental in winning World War II, and especially the Battle of Britain, with the invention of RADAR. It is much easier to win an air battle when you know exactly where the opposition’s planes are. It was down largely to meteorologist and electronic engineer Robert Watson-Watt and his assistant Arnold Wilkins. Their story is told in the wonderful, but under-rated, film Castles in the Sky, starring Eddie Izzard.

****SPOILER ALERT****

In the 1930s, Nazi Germany looked like an ever increasing threat as it ramped up its militarisation, building a vast army and air force. Britain was way behind in the size of its air force. Should Germany decide to bomb Britain into submission, it would be a totally one-sided battle. Something needed to be done.

A hopeful plan was hatched in the mid-1930s to build a death ray to zap pilots in attacking planes. One of the engineers asked to look into the idea was Robert Watson-Watt. He worked for the Met Office and was an expert in the practical use of radio waves. He had pioneered the idea of tracking thunderstorms using the radio emissions from lightning as a warning system for planes, developing the idea as early as 1915. This ultimately led to the invention of “Huff-Duff”, shorthand for High Frequency Direction Finding, whereby radio sources could be accurately tracked from the signals they emitted. That system helped Britain win the U-Boat war in the North Atlantic, as it allowed anti-submarine ships to detect and track U-Boats when they surfaced to use their radio. As a result, Huff-Duff helped sink a quarter of the U-Boats that were attacked. That in itself was vital for Britain to survive the siege that the U-Boats were enforcing by sinking convoys of supplies from the US.

However, by the 1930s Watson-Watt was working on other applications based on his understanding of radio. His assistant, Arnold Wilkins, quickly proved that the death ray idea would never work, but pointed out that planes seemed to affect radio waves. Together they instead came up with the idea of creating a radio detection system for planes. Many others had played with similar ideas, including German engineers, but no one had made a working system.

Because the French coast was only 20 minutes flying time away the only way to defend against German bombers would be to have planes patrolling the skies constantly. But that required vastly more planes than Britain could possibly build. If planes could be detected from sufficiently far away, then Spitfires could instead be scrambled to intercept them only when needed. That was the plan, but could it be made to work, when so little progress had been made by others?

Watson-Watt and Wilkins set to work making a prototype which they successfully demonstrated could detect a plane in the air (if only when it was close by). It was enough to get them money and a team to keep working on the idea. Watson-Watt followed a maxim of “Give them the third best to go on with; the second best comes too late, the best never comes”. With his radar system he did not come up with a perfect system, but with something that was good enough. His team just used off-the-shelf components rather than designing better ones specifically for the job. Also, once they got something that worked they put it into action. Unlike later, better systems, their original radar system didn’t involve sweeping radar signals that bounced off a plane when the sweep pointed at it, but a radio signal blasted in all directions. The position of the plane was determined by a direction-finding system Watson-Watt designed, based on where the radio signal bounced back from. That meant it took lots of power. However, it worked, and a network of antennas was set up in time for the Battle of Britain. Their radar system, codenamed Chain Home, could detect planes 100 miles away. That gave plenty of time to scramble planes. The real difficulty was actually getting the information to the airfields to scramble the pilots quickly. That was eventually solved with a better communication system.
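The sum at the heart of any pulsed radio-detection system is simple: a pulse travels out, bounces off the plane and returns, and since radio waves travel at the speed of light, the range is half the round-trip time multiplied by that speed. Here is a minimal sketch of the calculation (the numbers are illustrative, not Chain Home’s actual figures):

```python
# Pulsed radar ranging: the pulse travels to the plane and back,
# so range = (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def range_from_echo(round_trip_seconds):
    """Distance to the target in metres, computed from the echo delay."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A plane 100 miles (about 161 km) away returns an echo in about a millisecond.
delay = 2 * 161_000 / SPEED_OF_LIGHT
print(f"Echo delay: {delay * 1000:.2f} ms")                       # ~1.07 ms
print(f"Computed range: {range_from_echo(delay) / 1000:.0f} km")  # 161 km
```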

The Germans were aware of all the antennas appearing along the British coast, but decided they must be a communications system. Carrots also helped fool them! You may have heard that carrots help you see in the dark. That was just war-time propaganda invented to explain away the ability of the Brits to detect bombers so soon…a story was circulated that due to rationing Brits were eating lots of carrots, so had incredible eyesight as a result!

The Spitfires and their fighter pilots got all the glory and fame, but without radar they would not even have been off the ground before the bombers had dropped their payloads. Practical electronic engineers Robert Watson-Watt and Arnold Wilkins were the real unsung heroes of the Battle of Britain.

Paul Curzon, Queen Mary University of London

Postscript

In the 1950s Watson-Watt was caught speeding by a radar speed trap. He wrote a poem about it:

A Rough Justice

by Sir Robert Watson-Watt

Pity Sir Watson-Watt,
strange target of this radar plot

And thus, with others I can mention,
the victim of his own invention.

His magical all-seeing eye
enabled cloud-bound planes to fly

but now by some ironic twist
it spots the speeding motorist

and bites, no doubt with legal wit,
the hand that once created it.

More on…

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Film Futures: The Lord of the Rings

What if there was Computer Science in Middle Earth?…Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film series (and of course book trilogy): The Lord of the Rings.

***SPOILER ALERT***

The Lord of the Rings is an Oscar winning film series by Peter Jackson. It follows the story of Frodo as he tries to destroy the darkly magical, controlling One Ring of Power, by throwing it into the fires of Mount Doom in Mordor. This involves a three-film epic journey across Middle Earth where he and “the company of the Ring” are chased by the Nazgûl, the Ringwraiths of the evil Sauron. Their aim is to get to Mordor without being killed, without the Ring being taken from them and returned to Sauron, who created it, and without it being stolen by Gollum, who once owned it.

The Lord of the Rings: with computer science

In our computer science film future version, Frodo discovers there is a better way than setting out on a long and dangerous quest. Aragorn has been tinkering with drones in his spare time, and so builds a remotely controlled drone to carry the Ring to Mount Doom. Frodo pilots it from the safety of Rivendell. However, on its first test flight, its radio signal is jammed by the magic of Saruman from his tower. The drone crashes and is lost. It looks like the company must set off on a quest after all.

However, the wise Elf, the Lady Galadriel, suggests that they control the drone by impossible-to-jam fibre optic cable. The Elves are experts at creating such cables, using them in the highly sophisticated communication networks that span Middle Earth (unknown to the other peoples of Middle Earth), sending messages encoded in light down the cables.

They create a huge spool containing the hundreds of miles needed. Having also learnt from their first attempt, they build a new drone that uses stealth technology devised by Gandalf to make it invisible to the magic of Wizards, bouncing magical signals off it in a way that means even the ever watchful Eye of Sauron does not detect it until it is too late. The new drone sets off trailing a fine strand of silk-like cable behind, with the One Ring within. At its destination, the drone is piloted into the lava of Mount Doom, destroying the Ring forever. Sauron’s power collapses, and peace returns to Middle Earth. Frodo does not suffer from post-traumatic stress disorder, and lives happily ever after, though what becomes of Gollum is unknown (he was last seen on Mount Doom through the drone’s camera, chasing after it as it was piloted into the crater).

In real life…

Drones are being touted for lots of roles, from delivering packages to people’s doors to helping in disaster emergency areas. They have most quickly found their place as a weapon, however. At regular intervals a new technology changes war forever, whether it is the long bow, the musket, the cannon, the tank, the plane… The most recent technology to change warfare on the battlefield has been the introduction of drone technology. It is essentially the use of robots in warfare, just remote controlled, flying ones rather than autonomous humanoid ones, Terminator style (but watch this space – the military are not ones to hold back on a ‘good’ idea). The vast majority of deaths in the Russia-Ukraine war on both sides have been caused by drone strikes. Now countries around the world are scrambling to update their battle readiness, adding drones into their defence plans.

The earliest drones to be used on the battlefield were remote controlled by radio. The trouble with anything controlled that way is that it is very easy to jam – either by sending your own signals at higher power to take over control, or, more easily, by just swamping the airwaves with signal so the one controlling the drone does not get through. The need to avoid weapons being jammed is not a new problem. In World War II, some early torpedoes were radio controlled to their target, but that became ineffectual as jamming technology was introduced. Movie star Hedy Lamarr is famous for patenting a mechanism whereby a torpedo could be controlled by radio signals that jumped from frequency to frequency, making it harder to jam (without knowing the exact sequence and timing of the frequency jumps). In London, torpedo stations protecting the Thames from enemy shipping had torpedoes controlled by wire so they could be guided all the way to the target. Unfortunately it was not a great success: the only time one was used in a test, it blew up a harmless fishing boat passing by (luckily no one died).
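Lamarr’s frequency-hopping idea is easy to sketch in code. This toy version is not her actual patented piano-roll mechanism: here the sender and receiver share a secret seed for a pseudo-random number generator, so both compute the same unpredictable sequence of channels, while a jammer without the seed cannot know which frequency to swamp next.

```python
import random

# Example channel frequencies in MHz (invented for illustration).
CHANNELS = [42.1, 42.5, 43.0, 43.6, 44.2]

def hop_sequence(shared_seed, hops):
    """The agreed pseudo-random sequence of channels. Sender and receiver
    run this with the same seed so they hop together; a jammer without
    the seed cannot predict the next frequency."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

sender = hop_sequence(shared_seed=1942, hops=8)
receiver = hop_sequence(shared_seed=1942, hops=8)
assert sender == receiver  # both ends stay in step
print(sender)
```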

And that is the solution adopted by both sides in the Ukraine war to overcome jamming. Drones flying across the front lines are controlled by miles of fibre optic cable that is run out on spools (tens of miles rather than the hundreds we suggested above). The light signals controlling the drone pass down the glass fibre so cannot be jammed or interfered with. As a result, the front lines in Ukraine are now criss-crossed with gossamer-thin fibres, left behind once the drones hit their target or are taken out by the opposing side. It looks as though the war is being fought by robotic spiders (which one day may be the case, but not yet). With the advent of fibre-optic drone control, the war has changed again and new defences against this new technology are needed. By the time they are effective, the technology will likely have morphed into something new once more.

– Paul Curzon, Queen Mary University of London

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

An AI Oppenheimer Moment?

A nuclear explosion mushroom cloud
Image by Harsh Ghanshyam from Pixabay

All computer scientists should watch the staggeringly good film, Oppenheimer, by Christopher Nolan. It charts the life of J. Robert Oppenheimer, “father of the atom bomb”, and the team he put together at Los Alamos, as they designed and built the first weapons of mass destruction. The film is about science, politics and war, not computer science and all the science is quantum physics (portrayed incredibly well). Despite that, Christopher Nolan believes the film does have lessons for all scientists, and especially those in Silicon Valley.

Why? In an interview, he suggested that given the current state of Artificial Intelligence the world is at “an Oppenheimer moment”. Computer scientists in the 2020s, just like physicists in the 1940s, are creating technology that could be used for great good but also cause great harm (including, in both cases, a possibility that we use it in a way that destroys civilisation). Should scientists and technologists stay outside the political realm and leave discussion of what to do with their technology to politicians, while the scientists do as they wish in the name of science? That leaves society playing a game of catch-up. Or do scientists and technologists have more responsibility than that?

Artificial Intelligence isn’t so obviously capable of doing bad things as an atomic bomb was and still clearly is. There is also no clear imperative, such as Oppenheimer had, to get there before the fascist Nazi party, who were clearly evil and already using technology for evil (now the main imperative seems to be just to get there before someone else makes all the money, not you). It is, therefore, far easier for those creating AI technology to ignore both the potential and the real effects of their inventions on society. However, it is now clear AI can do, and already is doing, lots of bad as well as good. Many scientists understand this and are focussing their work on developing versions that are, for example, built to be transparent and accountable, are not biased, racist, homophobic, … that do put children’s protection at the heart of what they do… Unfortunately, not all are. And there is one big elephant in the room. AI can be, and is being, put in control of weapons in wars that are actively taking place right now. There is an arms race to get there before the other side does. From mass identification of targets in the Middle East to AI-controlled drone strikes in the Ukraine war, military AI is a reality and is in control of killing people with only minimal, if any, real humans in the loop. Do we really want that? Do we want AIs in control of weapons of mass destruction? Or is that total madness that will lead only to our destruction?

Oppenheimer was a complex man, as the film showed. He believed in peace but, a brilliant theoretical physicist himself, he managed a group of the best scientists in the world in the creation of the greatest weapon of destruction ever built to that point, the first atom bomb. He believed it had to be used once so that everyone would understand that all-out nuclear war would end civilisation (it was of course used against Japan, not the already defeated Nazis, the original justification). However, he also spent the rest of his life working for peace, arguing that international agreements were vital to prevent such weapons ever being used again. In times of relative peace people forget about the power we have to destroy everyone. The worries only surface again when there is international tension and wars break out, such as in the Middle East or Ukraine. We need to always remember the possibility is there, though, lest we use them by mistake. Oppenheimer thought the bomb would actually end war, having come up with the idea of “mutually assured destruction” as a means for peace. The phrase aimed to remind people that these weapons could never be used. He worked tirelessly, arguing for international regulation and agreements to prevent their use.

Christopher Nolan was asked, if there were a special screening of the film in Silicon Valley, what message he would hope the computer scientists and technologists would take from it. His answer was that they should take home the message of the need for accountability. Scientists do have to be accountable for their work, especially when it is capable of having massively bad consequences for society. A key part of that is engaging with the public, industry and government; not with vested interests pushing for their own work to be allowed, but to make sure the public and policymakers do understand the science and technology so there can be fully informed debate. Both international law and international policy are now a long way behind the pace of technological development. The willingness of countries to obey international law is also disintegrating, and there is a subtle new difference from the 1940s: technology companies are now as rich and powerful as many countries, so corporate accountability is now needed too, not just agreements between countries.

Oppenheimer was vilified over his politics after the war, and his name is now forever linked with weapons of mass destruction. He certainly didn’t get everything right: there have been plenty of wars since, so he didn’t manage to end all war as he had hoped, though so far no nuclear war. However, despite the vilification, he did spend his life making sure everyone understood the consequences of his work. Asked if he believed we had created the means to kill tens of millions of Americans (everyone) at a stroke, his answer was a clear “Yes”. He did ultimately make himself accountable for the things he had done. That is something every scientist should do too. The Doomsday Clock is closer to midnight than ever (89 seconds to midnight, midnight being man-made global catastrophe). Let’s hope the Tech Bros and scientists of Silicon Valley are willing to become accountable too, never mind countries. All scientists and technologists should watch Oppenheimer and reflect.

– Paul Curzon, Queen Mary University of London

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

The virtual Jedi

Image by Frank Davis from Pixabay

For Star Wars Day (May 4th), here is some Star Wars-inspired research from the archive…

Virtual reality can give users an experience that was previously only available a long time ago in a galaxy far, far away. Josh Holtrop, a graduate of Calvin College in the USA, constructed a Jedi training environment inspired by the scene from Star Wars in which Luke Skywalker goes up against a hovering droid that shoots laser beams at him. Fortunately, you don’t have to be blindfolded in the virtual reality version, like Luke was in the movie. All you need to wear over your eyes is a pair of virtual reality goggles with screens inside.

When you’re wearing the goggles, it’s as though you’re encased in a cylinder with rough metal walls. A bumpy metallic sphere floats in front of the glowing blade of your lightsaber – which in the real world is a toy version with a blue light and whooshy sound effects, though you see the realistic virtual version. The sphere in your goggles spins around, shooting yellow pellets of light toward you as it does. It’s up to you to bring your weapon around and deflect each menacing pulse away before it hits you. If you do, you get a point. If you don’t, your vision fills with yellow and you lose one of your ten lives.

Tracking movement with magnetism

It takes more than just some fancy goggles to make the Jedi trainer work, though. A computer tracks your movement in order to translate your position into the game. How does it know where you are? Because the whole time you’re playing the game, you’re also wandering through a magnetic field. The field comes from a small box on the ceiling above you and stretches for about a metre and a half in all directions. Sixty times every second, sensors attached to the headset and lightsaber check their position in the magnetic field and send that information to the computer. As you move your head and your sabre the sensors relay their positions, and the view in your goggles changes. What’s more, each of your eyes receives a slightly different view, just like in real life, creating the feeling of a 3D environment.
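To give a feel for how such a tracking loop might be structured, here is a heavily simplified sketch. Every name in it is invented for illustration; a real system would call the tracking hardware’s and renderer’s own APIs.

```python
import time

TICKS_PER_SECOND = 60  # the sensors report their positions 60 times a second

def read_sensor(name):
    """Stand-in for querying a magnetic sensor's position.
    A real system would ask the tracking hardware here."""
    return {"x": 0.0, "y": 1.5, "z": 0.3}  # dummy values

def render_view(head_pose, sabre_pose):
    """Stand-in for drawing the scene. A real renderer would draw it twice,
    from a slightly different position for each eye, to give 3D depth."""
    pass

for tick in range(TICKS_PER_SECOND * 5):  # run for five seconds
    head = read_sensor("headset")
    sabre = read_sensor("lightsaber")
    render_view(head, sabre)          # update what the goggles show
    time.sleep(1 / TICKS_PER_SECOND)  # wait for the next sensor reading
```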

Once the sensors have gathered all the information, it’s up to the software to create and animate the virtual 3D world – from the big cylinder you’re standing in to the tiny spheres the droid shoots at you. It controls the behaviour of the droid, too, making it move semi-randomly and become a tougher opponent as you go through the levels. Most users seem to get the hang of it pretty quickly. “Most of them take about two minutes to get used to the environment. Once they start using it, they get better at the game. Everybody’s bad at it the first sixty seconds,” Josh says. “My mother actually has the highest score for a beginner.”

The atom smasher

Much as every Jedi apprentice needs to find a way to train, there are uses for Josh’s system beyond gaming too. Another student, Jess Vriesma, wrote a program for the system that he calls the “atom smasher”. Instead of a helmet and lightsaber, each sensor represents a virtual atom. If the user guides the two atoms together, a bond forms between them. Two new atoms then appear, which the user can then add to the existing structure. By doing this over and over, you can build virtual molecules. The ultimate aim of the researchers at Calvin College was to build a system that lets you ‘zoom in’ to the molecule to the point where you could actually walk round inside it.
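The heart of the atom smasher is a simple geometric test: each frame, measure the distance between the two sensors and form a bond when they are close enough. A toy sketch, with the threshold and positions invented for illustration:

```python
import math

BOND_DISTANCE = 0.05  # metres - an invented "close enough to bond" threshold

def distance(a, b):
    """Straight-line distance between two 3D sensor positions."""
    return math.dist(a, b)

# Positions reported by the two hand-held sensors (invented values).
atom1 = (0.10, 1.20, 0.30)
atom2 = (0.13, 1.21, 0.31)

if distance(atom1, atom2) < BOND_DISTANCE:
    print("Bond formed! Spawning two new atoms to add to the molecule...")
```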

The team also bought themselves a shiny new magnetic field generator that lets them generate a field that’s almost nine metres across. That’s big enough for two scientists to walk round the same molecule together. Or, of course, two budding Jedi to spar against one another.

the CS4FN Team (from the archive)

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Film Futures: Brassed Off

The pit head of a colliery at sunset with a vivid red sky behind the setting sun
Image from Pixabay

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film Brassed Off.

***SPOILER ALERT***

Brassed Off, starring Pete Postlethwaite, Tara Fitzgerald and Ewan McGregor, is set at a time when the UK coal and steel industries were being closed down with terrible effects on local communities across the North of England and Wales. It tells the story of the closing of the fictional Grimley Pit (based on the real mining village of Grimethorpe), from the point of view of the members of the colliery brass band and their families. The whole village relies on the pit for their livelihoods.

Danny, the band’s conductor, is passionate about the band and wants to keep it going, even if the pit closes. Many of the other band members are totally despondent and just want to take the money that is on offer if they agree to the closure without a fight. They feel they have no future, and have given up hope over both the pit and the band (why have a colliery band if there is no colliery?).

Gloria, a company manager who grew up in the village arrives, conducting a feasibility study for the company to determine if the pit is profitable or not as justification for keeping it open or closing it down. A wonderful musician, she joins the band but doesn’t tell them that she is now management (including not telling her childhood boyfriend, and band member, Andy).

The story follows the battle to keep the pit open, and the effects on the community if it closes, through the eyes of the band members as they take part in a likely final ever brass band competition…

Brassed Off: with computer science

In our computer science film future version, the pit is still closing and Gloria is still management, but with a Computer Science PhD in digital music, she has built a flugelhorn-playing robot with a creative AI brain. It can not only play brass band instruments but arrange and compose too. On arriving at Grimley she asks if her robot can join the band. Initially, everyone is against the idea, but on hearing how good it is, and how it will help them do well in the national brass band competition, they relent. The band, with robot, go all the way to the finals and ultimately win…

The pit, however, closes and there are no jobs at all, not even low-quality work in local supermarkets (automatic tills and robot shelf-stackers have replaced humans) or call centres (now replaced by chatbots). Gloria also loses her job due to a shake-out of middle managers as the AIs take over the knowledge economy jobs. Luckily, she is OK as, with university friends, she starts a company building robot musicians, which is an amazing success. The band never make the finals again as bands full of Gloria’s flugelhorn- and cornet-playing robots take over (also taking the last of the band’s self-esteem). In future years, all the brass bands in the competition are robot bands as, with all the pits closing, the communities around them collapse. The world’s last ever flugelhorn player is a robot. Gloria and Andy never do get to kiss…

In real life…

Could a robot play a musical instrument? One existed centuries before the computer age. In 1737 Jacques de Vaucanson revealed his flute-playing automaton to the public. A small human-height figure, it played a real flute, which could be replaced to prove the machine could really play a real instrument. Robots have since played various instruments, including drums, and a cello-playing robot has performed with an orchestra in Malmö. While robot orchestras and bands are likely, it seems less likely that humans would stop playing as a result.

Can an AI compose music? The Victorian Ada Lovelace predicted machines one day would, a century before the first computer was ever built. She realised that this would be the case just from thinking about the machines that Charles Babbage was trying to build. Her prediction eventually came true. Now, of course, generative AI is being used to compose music, and can do so in any style, whether classical or pop. How good, or creative, it is may be debated, but it won’t be long before such systems have super-human music composition powers.
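One of the oldest tricks for getting a computer to compose is much simpler than today’s neural-network-based generative AI, but it shows the idea: a Markov chain picks each new note based on which notes tend to follow the previous one. A toy sketch, with an invented note-transition table:

```python
import random

# Toy algorithmic composition: a first-order Markov chain over notes.
# After each note, pick the next from notes that often follow it.
# (Modern generative AI uses neural networks, but the spirit is similar:
# learn what tends to follow what, then generate.)
FOLLOWS = {
    "C": ["D", "E", "G"],
    "D": ["E", "C"],
    "E": ["F", "G", "C"],
    "F": ["G", "E"],
    "G": ["A", "C", "E"],
    "A": ["G", "F"],
}

def compose(start="C", length=16, seed=None):
    """Generate a melody as a list of note names."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        melody.append(rng.choice(FOLLOWS[melody[-1]]))
    return melody

print(" ".join(compose(seed=42)))
```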

So a flugelhorn-playing robot that also composes music is not a pipe dream!

What about the social costs that are the real theme of the film though? When the UK pits and steelworks closed, whole communities were destroyed with great, and long lasting, social cost. It was all well and good for politicians to say there are new jobs being created by the new service and knowledge economy, but that was no help when no thought or money had actually been put into helping communities make the transition. “Get on your bike” was their famous, if ineffective, solution. For example, if the new jobs were to be in technology as suggested, then massive technology training programmes for those put out of work were needed, along with financial support in the meantime. Instead, whole communities were effectively left to rot and inequality increased massively. Areas in the North of England and Wales that had been the backbone of the UK economy still haven’t really recovered 40 years later.

Are we about to make the same mistakes again? We are certainly arriving at a similar point, but now it is those knowledge economy jobs that were supposed to be the saviours 40 years ago that are under threat from AI. There may well be new jobs as old ones disappear…but even if there are, will the people who lose their jobs be in a position to take the new ones, or are we heading towards a whole new lost generation? As back then, without serious planning and support, including successful efforts to reduce inequality in society, the changes coming could again cause devastation, this time much more widespread. As it stands, technology is increasing, not decreasing, inequality. We need to start now, including coming up with a new economic model of how the world will work that actively reduces inequality in society. Many science fiction writers have written of utopian futures where people only work for fun (e.g. Arthur C. Clarke’s classic “Childhood’s End” is one I’m reading at the moment), but that only happens if wealth is not sucked up by the lucky few. (In “Childhood’s End” it takes alien invaders to force out inequality.)

We can avoid a dystopian future, but only if we try…really hard.

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Film Futures: Tsotsi

A burnt out car
Image by Derek Sewell from Pixabay

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film Tsotsi.

***SPOILER ALERT***

The outstanding, and Oscar winning, film Tsotsi follows a week in the life of a ruthless Soweto township gang leader who calls himself Tsotsi (township slang for ‘thug’). Having clawed a feral existence together from childhood in extreme urban deprivation he has lost all compassion. After a violent car-jacking, he finds he has inadvertently kidnapped a baby. What follows, to the backing of raw “Kwaito” music, is his chance for redemption.

Introducing new technology does not
always have just the effect you intended …

Tsotsi: with computer science

In our computer science film future version the baby is still accidentally kidnapped, but luckily the baby has wealthy parents, so wasn’t born in the township and was chipped with a rice-sized device injected under the skin at birth. It both contains identity data and can be tracked for life using GPS technology. The police are waiting as Tsotsi arrives back at the township having followed his progress walking across the scrubland with the baby.

Tsotsi doesn’t get a chance to form a bond with the baby, so doesn’t have a life-changing experience. There is no opportunity for redemption. Instead on release from jail he continues on his violent crime spree with no sense of humanity whatsoever.

In real life…

In 2004 there was a proposal in Japan that children would be tagged in the way luggage is. Tagging is now a totally standard way of tracking goods as they are moved around warehouses, and a way to detect goods being shoplifted too. After all, if it is sensible to keep track of your suitcase in case it is lost, why wouldn’t you do the same for your even more important child? Fear of a child going missing is one of the biggest nightmares of being a parent. Take your eyes off a toddler for a few seconds in a shop and they could be gone. Such proposals repeatedly surface, and various similar ones have been suggested ever since. In 2010, for example, nursery school kids in Richmond, California were for a while required to wear jumpers containing RFID tags, supposedly to protect them. By placing sensors in appropriate places, the children’s movements could be tracked, so if they left school they could quickly be found.

Of course, pet cats and dogs are often chipped with tags under their skin. So it has also been suggested that children be tagged in a similar way. Then they couldn’t remove whatever clothing contained the tag and disappear. Someone who had kidnapped them would of course cut it out as, for example, Aaron Cross in the Bourne Legacy has to do at one point. Not what you want to happen to your child!

In general, there is an outcry and such proposals are dropped. As was pointed out at the time of the California version, an RFID tag is not actually a very secure solution, for example. There have been lots and lots of demonstrations of how such systems can be cracked (even at a distance). For example, the RFID tags used in US passports were cracked so that the passports could be copied at a distance. And if the system can be cracked, then bad actors can sit in a van outside a school, or follow a class on a school trip, and track the children. Not only does it undermine their privacy, it could put them in greater danger of the kind it was supposed to protect them from. Ahh, you might think, but if someone did kidnap a child then the chip would still show where they were! Except, if tags can be copied, then a duplicate could be used to leave a virtual version of the child in the school where they should be.

Security and privacy matter, and cyber security solutions are NEVER as simple as they seem. There are so often unforeseen consequences, and fixing one problem just opens up new ones. Utopias can sometimes be dystopian.

– Paul Curzon, Queen Mary University of London (extended from the archive version)

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Nemisindo: breaking the sound barrier

Women’s feet walking on a path
Image by ashokorg0 from Pixabay

Games are becoming ever more realistic. Now, thanks to the work of Joshua Reiss’s research team and their spinout company, Nemisindo, it’s not just the graphics that are amazing, the sound effects can be too.

There has been a massive focus over the years on improving the graphics in games. We’ve come a long way from Pong and its square ball and rectangular paddles. Year after year, decade after decade, new algorithms, new chips and new techniques have been invented that, combined with the capabilities of ever faster computers, mean we now have games with realistic, real-time graphics immersing us in the action as we play. And yet games are a multimedia experience, and realistic sounds matter too if the worlds are to be truly immersive. For decades, film crews have included whole teams of Foley editors whose job is to create realistic everyday sounds (check out the credits next time you watch a film!). Whether the sound is of someone walking on a wooden floor in bare feet, walking on a crunchy path, opening thick, plush curtains, or an armoured knight clanging their way down a bare, black cliff, lots of effort goes into getting the sound just right.

Game sound effects are currently often based on choosing sounds from a sound library, but games, unlike films, are increasingly open. Just about anything can happen and make a unique noise while doing so. The chances of the sound library having all the right sounds get slimmer and slimmer.

Suppose a knight character in a game drops a shield. What should it sound like? Well, it depends on whether it is a wooden shield or a metal one. Did it land on its edge or fall horizontally, and was it curved so it rang like a bell? Is the floor mud or did it hit a stone path? Did it bounce or roll? Is the knight in an echoey hall, on a vast plain or clambering down those clanging cliffs…

All of this is virtually impossible to get exactly right if you’re relying on a library of sound samples. Instead of providing pre-recorded sounds as sound libraries do, the software of Josh and team’s company Nemisindo (the Zulu word for ‘sound effects’) creates new sounds from scratch, exactly when they are needed, in real time as a game is played. This approach is called “procedural audio technology”. It allows the action in the game itself to determine the sound precisely: sounds are programmed by setting parameters linked to different action scenarios, rather than by selecting a specific pre-recorded sound. Aside from the flexibility this gives, doing sound effects this way brings big advantages in terms of memory too: because sounds are created on the fly, large libraries of sounds no longer need to be stored with the program.
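To give a flavour of the idea, here is a toy sketch (nothing like Nemisindo’s actual algorithms) that synthesises a footstep-like sound from parameters rather than playing back a recording: a burst of noise shaped by a decay envelope, with an invented ‘hardness’ parameter controlling how crisp or dull the step sounds.

```python
import math
import random
import struct
import wave

SAMPLE_RATE = 44_100  # audio samples per second

def footstep(duration=0.15, hardness=0.8, seed=None):
    """Synthesise a footstep-like sound from scratch: random noise shaped
    by an exponential decay. 'hardness' is an invented parameter: harder
    surfaces get a faster decay, so a sharper, crisper sound."""
    rng = random.Random(seed)
    n = int(SAMPLE_RATE * duration)
    samples = []
    for i in range(n):
        decay = math.exp(-i / (n * (1.1 - hardness)))  # faster decay if harder
        samples.append(rng.uniform(-1, 1) * decay)
    return samples

def save(samples, filename):
    """Write the samples out as a 16-bit mono WAV file."""
    with wave.open(filename, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)           # 16-bit audio
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1, min(1, s)) * 32767))
            for s in samples))

save(footstep(hardness=0.9), "stone_step.wav")  # crisp step on stone
save(footstep(hardness=0.3), "mud_step.wav")    # duller squelch in mud
```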

Nemisindo’s new software provides procedurally generated sounds for the Unreal game engine, allowing anyone building games using the engine to program a variety of action scenarios with realistic sounds tuned to the situation in their game as it happens…

In future, if that knight steps off the stone path just as she drops her shield, the sound generated will take the surface it actually lands on into account…

Procedural sound is the future of sound effects, so just as games are now stunning visually, expect them in future to become ever more stunning to listen to too. As they do, the whole experience will become ever more immersive… and what works for games works for other virtual environments too. All kinds of virtual worlds just became a lot more realistic. Getting the sound exactly right is no longer a barrier to a perfect experience.

Nemisindo has support from Innovate UK.

– Paul Curzon, Queen Mary University of London

More on …


Magazines …


Our Books …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Photogrammetry for fun, preservation and research

Digitally stitching together 2D photographs to visualise the 3D world

Composite image of one green glass bottle made from three photographs. Image by Jo Brodie

Imagine you’re the costume designer for a major new film about a historical event that happened 400 years ago. You’d need to dress the actors so that they look like they’ve come from that time (no digital watches!) and might want to take inspiration from some historical clothing that’s being preserved in a museum. If you live near the museum, and can get permission to see (or even handle) the material that makes it a bit easier but perhaps the ideal item is in another country or too fragile for handling.

This is where 3D imaging can help. Photographs are nice but don’t let you get a sense of what an object is like when viewed from different angles, and they don’t really give a sense of texture. Video can be helpful, but you don’t get to control the view. One way around that is to take lots of photographs, from different angles, then ‘stitch’ them together to form a three dimensional (3D) image that can be moved around on a computer screen – an example of this is photogrammetry.

In the (2D) example above I’ve manually combined three overlapping close-up photos of a green glass bottle, to show what the full-size bottle actually looks like. Photogrammetry is a more advanced version (but does more or less the same thing) which uses computer software to line up the points that overlap, and can produce a more faithful 3D representation of the object.
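To give a taste of the kind of processing involved, the sketch below uses the OpenCV library to find and match distinctive feature points between two overlapping photos (the filenames are hypothetical). A full photogrammetry pipeline goes much further, triangulating thousands of such matches across many images to recover 3D positions.

```python
import cv2

# Load two overlapping photographs of the same object (hypothetical filenames).
img1 = cv2.imread("bottle_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bottle_view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive feature points in each image and describe them.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors: pairs of points that look like the same physical point
# seen from two different angles.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"Found {len(matches)} candidate matching points")
```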

In the media below you can see a looping gif of the glass bottle being rotated first in one direction and then the other. This video is the result of a 3D ‘scan’ made from only 29 photographs using the free software app Polycam. With more photographs you could end up with a more impressive result. You can interact with the original scan here – you can zoom in and turn the bottle to view it from any angle you choose.

A looping gif of the 3D Polycam file being rotated one way then the other. Image by Jo Brodie

You might walk around your object and take many tens of images from slightly different viewpoints with your camera. Once your photogrammetry software has lined the images up on a computer, you can share the result and someone else can walk around the same object – but virtually!

Photogrammetry is being used by hobbyists (it’s fun!) but is also being used in lots of different ways by researchers. One example is the field of ‘restoration ecology’: in particular, monitoring damage to coral reefs over time, but also monitoring whether particular reef recovery strategies are successful. Reef researchers can use several cameras at once to take lots of overlapping photographs from which they can then create three-dimensional maps of the area. A new project recently funded by NERC* called “Photogrammetry as a tool to improve reef restoration” will investigate the technique further.

Photogrammetry is also being used to preserve our understanding of delicate historic items such as Stuart embroideries at The Holburne Museum in Bath. These beautiful craft pieces were made in the 1600s using another type of 3D technique. ‘Stumpwork’ or ‘raised embroidery’ used threads and other materials to create pieces with a layered three dimensional effect. Here’s an example of someone playing a lute to a peacock and a deer.

“Satin worked with silk, chenille threads, purl, shells, wood, beads, mica, bird feathers, bone or coral; detached buttonhole variations, long-and-short, satin, couching, and knot stitches; wood frame, mirror glass, plush”, 1600s. Photo CC0 from Metropolitan Museum of Art uploaded by Pharos on Wikimedia.

A project funded by the AHRC* (“An investigation of 3D technologies applied to historic textiles for improved understanding, conservation and engagement“) is investigating a variety of 3D tools, including photogrammetry, to recreate digital copies of the Stuart embroideries so that people can experience a version of them without the glass cases that the real ones are safely stored in.

Using photogrammetry (and other 3D techniques) means that many more people can enjoy, interact with and learn about all sorts of things, without having to travel or damage delicate fabrics, or corals.

*NERC (Natural Environment Research Council) and AHRC (Arts and Humanities Research Council) are two organisations that fund academic research in universities. They are part of UKRI (UK Research & Innovation), the wider umbrella group that includes several research funding bodies.

Other uses of photogrammetry

Cultural heritage and ecology examples are highlighted in this post, but photogrammetry is also used in interactive games (particularly virtual reality), engineering, crime scene forensics and the film industry – Mad Max: Fury Road, for example, used the technique to create a number of its visual effects. Hobbyists also create 3D versions (called ‘3D assets’) of all sorts of objects and sell these to games designers to include in their games for players to interact with.

Jo Brodie, Queen Mary University of London

More on …

Careers

This is a past example of a job advert in this area (since closed) for a photogrammetry role in virtual reality.

Also see our collection of Computer Science & Research posts.


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos