Understanding Ultron: A Turing test for world domination – Peter McOwan’s reassuring article that robots probably aren’t out to get us

by Peter McOwan, Queen Mary University of London (written in 2015)

‘Robot Mech Machine’ Image by Computerizer from Pixabay

Avengers: Age of Ultron is the latest film about robots or artificial intelligences (AI) trying to take over the world. AI is becoming ever present in our lives, at least in the form of software tools that demonstrate elements of human-like intelligence. The AIs in our mobile phones apply and adapt their rules to learn to serve us better, for example. But fears of AI’s potential negative impact on humanity remain, as seen in its projection into characters like Ultron, a super-intelligence accidentally created by the Avengers.

But what relation do the evil AIs of the movies have to scientific reality? Could an AI take over the world? How would it do it? And why would it want to? AI movie villains need to consider the whodunit staples of motive and opportunity.


Motive? What motive?

Let’s look at the motive. Few would say intelligence in itself unswervingly leads to a desire to rule the world. In movies, AIs are often driven by self-preservation: a realisation that fearful humans might shut them down. But would we give our AI tools cause to feel threatened? They provide benefits for us, and there seems little reason to create a sense of self-awareness in a system that searches the web for the nearest Italian restaurant, for example.

Another popular motive for AIs’ evilness is their zealous application of logic. In Ultron’s case, the goal of protecting the Earth can, he concludes, only be accomplished by wiping out humanity. This destruction by logic is reminiscent of the notion that a computer would select a stopped clock over one that is two seconds slow, as the stopped clock is exactly right twice a day whereas the slow one is never right. Ultron’s plot motivation, based on brittle logic combined with indifference to life, seems at odds with today’s AI systems, which reason mathematically about uncertainty and are built to work safely with users.
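To make the stopped-clock argument concrete, here is a minimal Python sketch of that brittle reasoning. The scoring rule and clock values are invented for illustration: the rule counts only the moments a clock is exactly right, so it prefers the stopped clock.

```python
# A brittle scoring rule: prefer the clock that is exactly right
# most often. (Toy example: rule and values are invented.)
def exact_matches_per_day(clock):
    # A stopped clock's fixed display matches the true time twice
    # a day; a clock running two seconds slow never matches exactly.
    return 2 if clock == "stopped" else 0

clocks = ["stopped", "two seconds slow"]
best = max(clocks, key=exact_matches_per_day)
print(best)  # -> 'stopped': internally consistent, practically useless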


Opportunity Knocks

When we consider an AI’s opportunity to rule the world we are on somewhat firmer ground. The famous Turing Test of machine intelligence was set up to measure a particular skill: the ability to conduct a believable conversation. The premise is that if you can’t tell the difference between the AI’s conversation and a human’s, the AI has passed the test and should be considered as intelligent as a human.

So what would a Turing Test for the ‘skill’ of world domination look like? To explore that we need to compare antisocial AI behaviours with the attributes expected of a human world dominator. World dominators need to control important parts of our lives, say our access to money or our ability to buy a house. AI does that already – lending decisions are frequently made by an AI sifting through mountains of information to decide your creditworthiness. AIs now trade on the stock market too.
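As a flavour of what such an automated lending decision involves, here is a deliberately simplified Python sketch. Every field name and threshold is invented for illustration; real systems use statistical models learned from far more data, but the shape of the decision is similar.

```python
# A toy credit decision. (All fields and thresholds are invented;
# real lenders use learned models over far richer data.)
def credit_decision(applicant):
    score = 0
    score += 2 if applicant["years_at_address"] >= 3 else 0
    score += 3 if applicant["income"] > 30_000 else 0
    score -= 4 if applicant["missed_payments"] > 0 else 0
    return "approve" if score >= 3 else "refer to a human"

print(credit_decision(
    {"years_at_address": 5, "income": 32_000, "missed_payments": 0}
))  # -> approve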

An overlord would give orders and expect them to be followed. Anyone who has stood helplessly at a shop’s self-service till as it makes repeated bagging-related demands of them already knows what it feels like to be bossed about by an AI.


Kill Bill?

Finally, no megalomaniac Hollywood robot would be complete without at least some desire to kill us. Today, military robots can identify targets without human intervention. Currently a human controller gives permission to attack, but it’s not a stretch to say the potential to kill autonomously already exists in these AIs; we would just need to change the computer code to allow it.

These examples arguably show AI in control of limited but significant parts of life on Earth. But to truly dominate the world, movie style, these individual AIs would need to start working together to create a synchronised AI army: that bossy self-service till talking to your health monitor and refusing to sell you beer, then both ganging up with a credit-scoring system that will only raise your credit limit if you buy a pair of trainers with a built-in GPS tracker and eat the kale from your smart fridge, but only after the shoe data shows you completed the required five-mile run.

It’s a worrying picture but fortunately I think it’s an unlikely one. Engineers worldwide are developing the Internet of Things: networks connecting all manner of devices together to create new services. These are pieces of a jigsaw that would need to join together to form the big picture of total world domination. It’s an unlikely situation – too much has to fall into place and work together. It’s a lot like the infamous plot hole in Independence Day, where an Apple Mac and an alien spaceship’s software inexplicably have cross-platform compatibility. [See the video below for a possible answer!]

Our earthly AI systems are written in a range of computer languages, hold different data in different ways, and use different, incompatible rule sets and learning techniques. Unless we design them to be compatible, there is no reason why two safely designed AI systems, developed by separate companies for separate services, would spontaneously blend to share capabilities and form some greater common goal without human intervention.
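To see why spontaneous blending is implausible, consider this minimal Python sketch (both record formats are invented): two services describe the same customer in incompatible shapes, and nothing connects them until a human decides the fields correspond and writes the glue code.

```python
# Two independently built services hold overlapping facts in
# incompatible shapes. (Both formats are invented for illustration.)
till_record = {"cust": "A123", "age_verified": True}        # self-service till
credit_record = {"customer_id": "A123", "limit_gbp": 1500}  # credit scorer

def link_records(till, credit):
    # Human-written glue: the systems share no schema, so a person
    # must decide that 'cust' and 'customer_id' mean the same thing.
    if till["cust"] == credit["customer_id"]:
        return {**credit, "age_verified": till["age_verified"]}
    return None

print(link_records(till_record, credit_record))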

So could AIs, and the robot bodies containing them, pass the test and take over the world? Only if we humans let them, and help them a lot. Why would we?

Perhaps because humans are the stupid ones!


Peter McOwan introducing Age of Ultron

You can see the author of this article giving a talk at the Genesis Cinema in Stepney Green in 2015 to introduce the film.

Background

This post was first published on CS4FN and a copy can also be found on pages 8-9 of ‘Serious Fun’, Issue 26 of the CS4FN magazine, which celebrated the life of Peter McOwan, who died in 2019. Peter was the co-founder (with Paul Curzon) of the CS4FN magazine and website.

All of our material is free to download from: https://cs4fndownloads.wordpress.com


Further reading

DragonflyAI: I see what you see

What use is a computer that sees like a human? Can’t computers do better than us? Well, such a computer can predict what we will and will not see, and there is BIG money to be gained doing that!

The Hong Kong Skyline


Peter McOwan’s team at Queen Mary spent 10 years doing exploratory research understanding the way our brains really see the world, exploring illusions, inventing games to test the ideas, and creating a computer model to test their understanding. Ultimately they created a program that sees like a human. But what practical use is a program that mirrors the oddities of the way we see the world? Surely a computer can do better than us: noticing all the things that we miss or misunderstand? Well, for starters the research opens up exciting possibilities for new applications, especially for marketeers.

The Hong Kong Skyline as seen by DragonflyAI


A fruitful avenue to emerge is ‘visual analytics’ software: applications that predict what humans will and will not notice. Our world is full of competing demands, overloading us with information. All around us things vie to catch our attention, whether a shop window display, a road sign warning of danger or an advertising poster.

Imagine: a shop has a big new promotion designed to entice people in, but no more people enter than normal. No one notices the display; their attention is elsewhere. Another company runs a web ad campaign, but it has no effect, as people’s eyes are pulled elsewhere on the screen. A third company pays to have its products appear in a blockbuster film. Again, a waste of money: in surveys afterwards no one knew the products had been there. A town council puts up a new warning sign at a dangerous bend in the road, but the crashes continue. These are all situations where predicting in advance where people will look allows you to get it right. In the past this was done either by long and expensive user testing, perhaps using software that tracks where people look, or by having teams of ‘experts’ discuss what they think will happen. What if a program could make the predictions in a fraction of a second? What if you could tweak things repeatedly until your important messages could not be missed?

Queen Mary’s Hamit Soyel turned the research models into a program called DragonflyAI, which does exactly that. The program analyses all kinds of imagery in real-time and predicts the places where people’s attention will, and will not, be drawn. It works whether the content is moving or not, and whether it is in the real world, completely virtual, or both. This then gives marketeers the power to predict and so influence human attention to see the things they want. The software quickly caught the attention of big, global companies like NBC Universal, GSK and Jaywing who now use the technology.
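DragonflyAI’s own model is proprietary, but the general idea of a program predicting where attention will fall can be sketched with off-the-shelf tools. This minimal example (the file names are illustrative) uses OpenCV’s spectral-residual saliency detector, a far simpler method than a model of human vision, and assumes the opencv-contrib-python package is installed:

```python
import cv2

# Load any image (path is illustrative) and compute a saliency map:
# brighter regions are where this simple model predicts eyes will go.
image = cv2.imread("hong_kong_skyline.jpg")
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
success, saliency_map = saliency.computeSaliency(image)

# Scale the [0, 1] float map to an 8-bit image for viewing.
heatmap = (saliency_map * 255).astype("uint8")
cv2.imwrite("saliency_heatmap.png", heatmap)
```

A marketeer could run something like this over a draft poster, move the key message to a high-saliency spot, and re-run until it cannot be missed.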

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]