Ethics – What would you do? Part 2: answers

by Peter McOwan, Queen Mary University of London

Yesterday we published ‘Ethics – What would you do?’, which had a poll at the end where readers could pick one of three options. If you’ve not selected your option, you might like to do that first before reading on…

The answers

If you picked Option 1

1) Go ahead and launch. After all, there are still plenty of parts to the game that do work and are fun, there will always be some errors, and for this game in particular thousands have been signing up for text alerts to tell them when it’s launched. It will make many thousands happy.

That means you follow an ethical approach called ‘Act utilitarianism’.

Act Happy

The main principle of this theory, put forward by philosopher John Stuart Mill, is to create the most happiness (another name for happiness here is utility, hence ‘utilitarianism’). For each situation you behave (act) in a way that increases the happiness of the largest number of people, and this is how you decide what is wrong or right. You may take different actions in similar situations. So you might choose to launch a flawed game if you know that you have pre-sales of a hundred thousand, but another time decide not to launch a different flawed game where there are only one thousand pre-sales, as you won’t be making so many people unhappy. It’s about considering the utility of each action you take. There is no hard and fast rule.
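
In computing terms, an act utilitarian re-does the sums for every situation. Here is a minimal Python sketch of that idea; the function names, happiness scores and percentages are all invented for illustration, not taken from the article:

```python
# Toy act-utilitarian decision rule: for each situation, estimate the
# happiness each possible action creates, then do whatever scores highest.
# All the numbers below are made-up placeholders.

def utility_of_launch(pre_sales, fraction_hit_by_bug):
    """Happy players minus players annoyed by the bug (annoyance counts double)."""
    annoyed = pre_sales * fraction_hit_by_bug
    return (pre_sales - annoyed) - 2 * annoyed

def utility_of_delay(pre_sales, hype_level):
    """Everyone waiting is a little disappointed, more so if the hype is high."""
    return -pre_sales * hype_level

def act_utilitarian_choice(pre_sales, fraction_hit_by_bug, hype_level):
    options = {
        "launch now": utility_of_launch(pre_sales, fraction_hit_by_bug),
        "delay and fix": utility_of_delay(pre_sales, hype_level),
    }
    # Pick the action with the greatest total happiness *in this situation*.
    return max(options, key=options.get)

# Two similar dilemmas can come out differently once the numbers change:
print(act_utilitarian_choice(100_000, fraction_hit_by_bug=0.1, hype_level=0.5))
print(act_utilitarian_choice(1_000, fraction_hit_by_bug=0.8, hype_level=0.1))
```

Because the calculation is repeated from scratch each time, the same person can quite consistently launch one flawed game and delay another – exactly the ‘no hard and fast rule’ point above.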

If you picked Option 2

2) Cancel the launch until the game is fixed properly, no one should have to buy a game that doesn’t work 100 per cent.

That means you follow an ethical approach called ‘Duty Theory’.

Do your Duty

Duty theories are based on the idea of there being universal principles, such as ‘you should never ever lie, whatever the circumstances’. This is also known as the deontological approach to ethics (philosophers like to bring in long words to make simple things sound complicated!). The German philosopher Immanuel Kant was one of the main players in this field. His ‘Categorical Imperative’ (like I said, long words…) said “only act in a way that you would want everyone else to act” (…simple idea!). So if you don’t think there should ever be mistakes in software then don’t make any yourself. This can be quite tough!

If you picked Option 3

3) Go ahead and launch. After all it’s almost totally working and the customers are looking forward to it. There will always be some errors in programs: it’s part of the way complicated software is, and a delay to game releases leads to disappointment.

You would be following the approach called ‘Rule utilitarianism’.

Spread a little happiness

Say something nice to everyone you meet today…it will drive them crazy

The main principle of this flavour of utilitarian theory, put forward by philosopher Jeremy Bentham, is to create the most happiness (happiness here is called utility, hence ‘utilitarianism’). You follow general rules that increase the happiness of the largest number of people, and this is how you decide what’s wrong or right. So in our dilemma the rule could be ‘even if the game isn’t 100% correct, people are looking forward to it and we can’t disappoint them’. Here the rule increases happiness, and we apply it again in the future if the same situation occurs.





EPSRC supports this blog through research grant EP/W033615/1.

Ethics – What would you do?

by Peter McOwan, Queen Mary University of London

You often hear about unethical behaviour, whether by politicians or popstars, but getting to grips with ethics, which deals with questions about what behaviours are right and wrong, is an important part of computer science too. Find out about it and at the same time try our ethical puzzle below and learn something about your own ethics…

Is that legal?

Ethics are about the customs and beliefs that a society has about the way people should be treated. These beliefs can be different in different countries, sometimes even between different regions of the same country, which is why it’s always important to know something about the local area when going on holiday. You don’t want to upset the local folk. Ethics tend to form the basis of countries’ laws and regulations, combining general agreement with practicality. Sticking your tongue out may be rude and so unethical, but the police have better things to do than arrest every rude school kid. Similarly, slavery was once legal, but was it ever ethical? Laws and ethics also have other differences; individuals tend to judge unethical behaviour, and shun those who behave inappropriately, while countries judge illegal behaviour – using a legal system of courts, judges and juries to enforce laws with penalties.

Dilemmas, what to do?

Now imagine you have the opportunity to go treading on the ethical and legal toes of people across the world from the PC in your home. Suddenly the geographical barriers that once separated us vanish. The power of computer science, like any technology, can be used for good or evil. What is important is that those who use it understand the consequences of their actions, and choose to act legally and ethically. Understanding legal requirements, for example contracts, computer misuse and data protection, is an important part of a computer scientist’s training, but can you learn to be ethical?

Computer scientists study ethics to help them prepare for situations where they have to make decisions. This is often done by considering ethical dilemmas. These are a bit like the computer science equivalent of soap opera plots. You have a difficult problem, a dilemma, and have to make a choice. You suddenly discover you have an unknown long-lost sister living on the other side of the Square: do you make contact or not? (On TV this choice is normally followed by a drum roll as the episode ends.)

Give it a go

Here is your chance to try an ethical dilemma for yourself. Read the alternatives and choose what you would do in this situation. Then click on the poll choice. Like all good ‘personality tests’ you find out something about yourself: in this case which type of ethical approach you have in the situation according to some famous philosophers. There are also some fascinating facts to impress your mates. We’ll share the answers tomorrow.

Your Dilemma and your ethical personality

You are working for a company who are about to launch a new computer game. The adverts have gone out, the newspapers and TV are ready for the launch … then the day before you are told that there is a bug, a mistake, in the software. It means players sometimes can’t kill the dragon at the end of the game. If you hit the problem the only solution is to start the final level again. They think it can be fixed, but it will take about a week or so to track it down. The computer code is hard to fix as it’s been written by 10 different people and 5 of them have gone on a back-packing holiday so can’t be contacted.

Answers tomorrow!


This article was first published on the original CS4FN website and a copy also appears on page 6 of issue 26 of the CS4FN magazine: Peter McOwan: Serious Fun, celebrating the life and research interests of Peter, who died in 2019. You can download a free PDF copy of the magazine below as well as our entire range of back issues of magazines and booklets at our CS4FN download site.




EPSRC supports this blog through research grant EP/W033615/1.

Blade: the emotional computer

by Paul Curzon, Queen Mary University of London

Communicating with computers is clunky to say the least – we even have to go to IT classes to learn how to talk to them. It would be so much easier if they went to school to learn how to talk to us. If computers are to communicate more naturally with us we need to understand more about how humans interact with each other.

The most obvious way that we communicate is through speech – we talk, we listen – but actually our communication is far more subtle than that. People pick up lots of information about our emotions and what we really mean from our expressions and the tone of our voice – not from what we actually say. Zabir, a student at Queen Mary, was interested in this, so decided to experiment with these ideas for his final year project. He used a kit called Lego Mindstorms that makes it really easy to build simple robots. The clever stuff comes in because, once built, Mindstorms creations can be programmed with behaviour. The result was Blade.

In the video above you can see Blade the robot respond.

Blade, named after the Wesley Snipes film, was a robotic face capable of expressing emotion and responding to the tone of the user’s voice. Shout at Blade and he would look sad. Talk softly and, even though he could not understand a word of what you said, he would start to appear happy again. Why? Because your tone says what you really mean, whatever the words – that’s why parents talk gobbledegook softly to babies to calm them.

Blade was programmed using a neural network, a computer science model of the way the brain works, so he had a brain similar to ours in some simple ways. Blade learnt how to express emotions very much like children learn – by tuning the connections between his artificial neurons based on his experience. Zabir spent a lot of time shouting and talking softly to Blade, teaching him what the tone of his voice meant and so how to react. Blade’s behaviour wasn’t directly programmed; it was the ability to learn that was programmed.
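
We don’t have Zabir’s actual code or training data, but the learning idea can be shown with a tiny sketch. Suppose each utterance is boiled down to two made-up numbers, loudness and pitch variation, and a single artificial neuron tunes its connection weights from labelled examples, a little like Blade being taught by experience (a toy perceptron for illustration only, not the real Blade software):

```python
import random

# A single artificial neuron that learns to map tone-of-voice features
# (loudness, pitch variation) to an emotion: 1 = "shouted at" (sad face),
# 0 = "spoken to softly" (happy face). Features and data are invented.

def predict(weights, bias, features):
    activation = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if activation > 0 else 0

def train(examples, epochs=50, learning_rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            error = label - predict(weights, bias, features)
            # Tune the connections a little whenever the guess is wrong,
            # just as Blade's network was tuned by experience.
            weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

# (loudness, pitch variation) -> 1 if shouted, 0 if soft. Made-up training data.
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.6], 1),
            ([0.2, 0.3], 0), ([0.1, 0.2], 0), ([0.3, 0.1], 0)]

weights, bias = train(examples)
print("Loud voice ->", "sad face" if predict(weights, bias, [0.85, 0.7]) else "happy face")
print("Soft voice ->", "sad face" if predict(weights, bias, [0.15, 0.2]) else "happy face")
```

Blade’s real network was bigger and listened to real sound, but the principle is the same: the behaviour emerges from tuned connections rather than from rules typed in by a programmer.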

Eventually we had to take Blade apart, which was surprisingly sad. He really did seem to be more than a bunch of Lego bricks. Something about his very human-like expressions pulled on our emotions: the same trick that cartoonists pull with the big eyes of characters they want us to love.

Zabir went on to work in the City for the merchant bank JP Morgan.


⬇️ This article has also been published in two CS4FN magazines – first published on p13 in Issue 4, Computer Science and BioLife, and then again on page 18 in Issue 26 (Peter McOwan: Serious Fun), our magazine celebrating the life and research of Peter McOwan (who co-founded CS4FN with Paul Curzon and researched facial recognition). There’s also a copy on the original CS4FN website. You can download free PDF copies of both magazines below, and any of our other magazines and booklets from our CS4FN Downloads site.

The video below, ‘Why faces are special’, from Queen Mary University of London asks the question: “How does our brain recognise faces? Could robots do the same thing?”

Peter McOwan’s research into face recognition informed the production of this short film. Designed to be accessible to a wide audience, the film was selected as one of the 55 finalists from 1,450 films submitted to the CERN CineGlobe film festival in 2012.

Related activities

We have some fun paper-based activities you can do at home or in the classroom.

  1. The Emotion Machine Activity
  2. Create-A-Face Activity
  3. Program A Pumpkin

See more details for each activity below.

1. The Emotion Machine Activity

From our Teaching London Computing website. Find out about programs and sequences and how a high-level language is translated into low-level machine instructions.

2. Create-A-Face Activity

From our Teaching London Computing website. Get people in your class (or at home if you have a big family) to make a giant robotic face that responds to commands.

3. Program A Pumpkin

Especially for Hallowe’en, a slightly spookier, pumpkin-ier version of The Emotion Machine above.




EPSRC supports this blog through research grant EP/W033615/1.

Beheading Hero’s mechanical horse – an early ‘magical’ (nearly headless) automaton from Ancient Greece

by Paul Curzon, Queen Mary University of London

Stories of Ancient Greece abound with myths but also with amazing inventions. Some of the earliest automatons, mechanical precursors of robots, were created by the Ancient Greeks. Intended to delight and astound or to be religious idols, they brought statues of animals and people to life. One story holds that Hero of Alexandria invented a magical, mechanical horse that not only moved and drank water, but was also impossible to behead. It just carried on drinking as you sliced a sword clean through its neck. The head remained solidly attached to the body. Myth or mystery? How could it be done?

The Ancient Greeks were clever. With many inventions we think of as modern, the Greeks got there first. They even invented the first known computer. Hero of Alexandria was one of the cleverest, an engineer and prolific inventor. Despite living in the first century, he invented the first known steam engine (long before the famous ones from the start of the industrial revolution), the first vending machine, a musical instrument that was the first wind-powered machine, and even the pantograph, a parallelogram structure used to make exact copies of drawings, enlarged or reduced. Did Hero invent a magical mechanical horse? He did, and you really could slice cleanly through its robotic neck with a sword, leaving the head in place.

Magic, myth and mystery

Queen Mary’s Peter McOwan* was fascinated by magic, and especially by Hero’s horse, as a child, and was keen to build one. When TEMI, a European project, was funded he had his chance. TEMI aimed to bring more showmanship, magic and mystery to schools to increase motivation. By making lessons more like detective work, solving mysteries, they can be lots more fun. The project needed lots of mysteries, just like Hero’s horse, and artist Tim Sargent was commissioned to recreate the horse.

If you’re ever in Athens, you can see a version of Hero’s horse, as well as many other Greek inventions at Kotsanas Museum of Ancient Greek Technology.

How does it work?

The challenge was to create a version that used only Ancient Greek technology – no electricity or electromagnets, only mechanical means like gears, bearings, levers, cogs and the like. It was actually done with a clever rotating wheel. As the sword slices through a gap in the neck, the wheel keeps the head and body connected, first in front of the blade, then behind it. Can you work out the details of how it was done?

See a video of the mechanism in action below, with Peter introducing it.


This article was first published on the original CS4FN website and there is a copy in Issue 26 of the CS4FN magazine which is a memorial issue for *Peter McOwan, who died in June 2019. Peter, along with Paul Curzon, was one of the co-founders of CS4FN. You can download a free PDF copy of the magazine, called “Peter W McOwan: Serious Fun”, from our downloads site – along with copies of all of our other free material.




EPSRC supports this blog through research grant EP/W033615/1.

Alexander Graham Bell: It’s good to talk

An antique phone

Image: a modified version of one by Christine Sponchia from Pixabay

by Peter W McOwan, Queen Mary University of London

(From the archive)

The famous inventor of the telephone, Alexander Graham Bell, was born in 1847 in Edinburgh, Scotland. His story is a fascinating one, showing that like all great inventions, a combination of talent, timing, drive and a few fortunate mistakes are what’s needed to develop a technology that can change the world.

A talented Scot

As a child the young Alexander Graham Bell, Aleck, as he was known to his family, showed remarkable talents. He had the ability to look at the world in a different way, and come up with creative solutions to problems. Aged 14, Bell designed a device to remove the husks from wheat by combining a nailbrush and paddle into a rotary-brushing wheel.

Family talk

The Bell family had a talent with voices. His grandfather had made a name for himself as a notable, but often unemployed, actor. Aleck’s mother was deaf, but rather than use her ear trumpet to talk to her like everyone else did, the young Alexander came up with the cunning idea that speaking to her in low, booming tones very close to her forehead would allow her to hear his voice through the vibrations it made. This special bond with his mother gave him a lifelong interest in the education of deaf people, which, combined with his inventive genius and some odd twists of fate, was to change the world.

A visit to London, and a talking dog

While visiting London with his father, Aleck was fascinated by a demonstration of Sir Charles Wheatstone’s “speaking machine”, a mechanical contraption that made human-like noises. On returning to Edinburgh, their father challenged Aleck and his older brother to come up with a machine of their own. After some hard work and scrounging bits from around the place they built a machine with a mouth, throat, nose, movable tongue, and bellows for lungs – and it worked. It made human-like sounds. Delighted by his success, Aleck went a step further and massaged the mouth of his Skye terrier so that the dog’s growls were heard as words. Pretty wruff on the poor dog.

Speaking of teaching

By the time he was 16, Bell was teaching music and elocution at a boys’ boarding school. He was still fascinated by trying to help those with speech problems improve their quality of life, and was very successful in this, later publishing two well-respected books called ‘The Practical Elocutionist’ and ‘Stammering and Other Impediments of Speech’. Alexander and his brother toured the country giving demonstrations of their techniques to improve people’s speech. He also started his studies at the University of London, where a mistake in reading German was to change his life and lay the foundations for the telecommunications revolution.

A ‘silly’ language mistake that changed the world

At university, Bell became fascinated by the ideas of the German physicist Hermann von Helmholtz. Von Helmholtz had written a book, ‘On the Sensations of Tone’, in which he said that vowel sounds, a, e, i, o and u, could be produced using electrical tuning forks and resonators. However, Bell couldn’t read German very well, and mistakenly believed that von Helmholtz had written that vowel sounds could be transmitted over a wire. This misunderstanding changed history. As Bell later stated, “It gave me confidence. If I had been able to read German, I might never have begun my experiments in electricity.”

Tragedy and Travel

Things were going well for young Bell’s career when tragedy struck. He and both his brothers contracted tuberculosis, a common disease at the time. His two brothers died and, at the age of 23, still suffering from the disease, Bell left Britain to move to Ontario in Canada to convalesce, and then to Boston to work in a school for deaf students.

The time for more than dots and dashes

His dreams of transmitting voices over a wire were still spinning round in his creative head. It just needed some new ideas to spark him off again. Samuel Morse had developed Morse code and the electric telegraph, which allowed single messages in the form of long and short electrical pulses, dots and dashes, to be transmitted rapidly along a wire over huge distances. Bell saw the similarity between the idea of being able to send multiple messages and the multiple notes in a musical chord: his “harmonic telegraph” could be a way to send voices.

Chance encounter

Again chance played its role in telecommunications history. At the electrical machine shop of Charles Williams, Bell ran into young Thomas Watson, a skilled electrical machinist able to build the devices that Bell was devising. The two teamed up and started to work toward making Bell’s dream a reality. To make it work they needed to invent two things: something to measure a voice at one end, and another device to reproduce the voice at the other – what we would today call the microphone and the speaker.

The speaker accident

June 2, 1875 was a landmark day for team Bell and Watson. Working in their laboratory they were trying to free a reed, a small flat piece of metal, which they had wound too tightly to the pole of an electromagnet. In trying to free it Watson produced a ‘twang’. Bell heard the twang and came running. It was a sound similar to the sounds in human speech; this was the solution to producing an electronic voice, a discovery that must have come as a relief for all the dogs in the Boston area.

The mercury microphone

Bell had also discovered that a wire vibrated by his voice while partially dipped in a conducting liquid, like mercury or battery acid, could be made to produce a changing electrical current. They now had a device that could transform the voice into an electronic signal. All that was needed was to put the two inventions together.

The first ’emergency’ phone call (allegedly)

On March 10, 1876, Bell and Watson set out to test their new system. The story goes that Bell knocked over a container with battery acid, which they were using as the conducting liquid in the ‘microphone’. Spilled acid tends to be nasty and Bell shouted out “Mr. Watson, come here. I want you!” Watson, working in the next room, heard Bell’s cry for help through the wire. The first phone call had been made, and Watson quickly went through to answer it. The telephone was invented, and Bell was only 29 years old.

The world listens

The telephone was finally introduced to the world at the Centennial Exhibition in Philadelphia in 1876. Bell quoted Hamlet over the phone line from the main building 100 yards away, causing the surprised Brazilian Emperor Dom Pedro to exclaim, “My God, it talks”, and talk it did. From there on, the rest, as they say, is history. The telephone spread throughout the world, changing the way people lived their lives. It was not without its social problems, though. In many upper-class homes it was considered vulgar. Many people considered it intrusive (just like some people’s view of mobile phones today!), but eventually it became indispensable.

Can’t keep a good idea down

Inventor Elisha Gray also independently designed his own version of the telephone. In fact both he and Bell rushed their designs to the US patent office within hours of each other, but Alexander Graham Bell patented his telephone first. With the massive amounts of money to be made, Elisha Gray and Alexander Graham Bell entered into a famous legal battle over who had invented the telephone first, and Bell had to fight many legal battles over his lifetime as others claimed they had invented the technology first. Bell won all the legal cases, partly, many claimed, because he was such a good communicator and had such a convincing talking voice. As is often the way, few people now remember the other inventors. In fact, it is now recognised that the Italian Antonio Meucci had invented a method of electronic voice communication earlier, though he did not have the funds to patent it.

Fame and Fortune under Forty

Bell became rich and famous, and he was only in his mid-thirties. The Bell Telephone Company was set up, and later went on to become AT&T, one of America’s foremost telecommunications giants.

Read Terry Pratchett’s brilliant book ‘Going Postal’ for a fun fantasy about inventing and making money from communication technology on Discworld.



EPSRC supports this blog through research grant EP/W033615/1. 

Manufacturing Magic

Cover of ‘The Twelve Magicians of Osiris’ – eyes, lightning between hands, camel, pyramids

by Howard Williams, Queen Mary University of London

(From the archive)

Can computers lend a creative hand to the production of new magic tricks? That’s a question our team, led by Peter McOwan at Queen Mary, wrestled with.

The idea that computers can help with creative endeavours like music and drawing is nothing new – turn the radio on and the song you are listening to will have been produced with the help of a computer somewhere along the way, whether it’s a synthesiser sound, or the editing of the arrangement, and some music is created purely inside software. Researchers have been toiling away for years, trying to build computer systems that actually write the music too! Some of the compositions produced in this way are surprisingly good! Inspired by this work, we decided to explore whether computers could create magic.

The project to build creative software to help produce new magic tricks started with a magical jigsaw that could be rearranged in certain ways to make objects on its surface disappear. Pretty cool, but what part did the computer play? A jigsaw is made up of different pieces, each with four sides – the number of different ways all these pieces can be put together is very large; for a human to sit down and try out all the different configurations would take many hours (perhaps thousands, if not millions!). Whizzing through lots of different combinations is something a computer is very good at. When there are simply too many different combinations for even a computer to try out exhaustively, programmers have to take a different approach.

Evolve a jigsaw

A genetic algorithm is a program that mimics the biological process of natural selection. We used one to intelligently search through all the interesting combinations that the jigsaw might be made up from. A population of jigsaws is created, and is then ‘evolved’ via a process that evaluates how good each combination is in each generation, gradually weeding out the combinations that wouldn’t make good jigsaws. At the end of the process you hope to be left with a winner: a jigsaw that matches all the criteria that you are hoping for. In this particular case, we hoped to find a jigsaw that could be built in two different ways, but each with a different number of the same object in the picture, so that you could appear to make an object disappear and reappear again as you made and remade it. The idea is based on a very old trick popularised by Sam Loyd, but our aim was to create a new version that a human couldn’t realistically have come up with without a lot of free time on their hands!

To understand what role the computer played, we need to explore the Genetic Algorithm mechanism it used to find the best combinations. How did the computer know which combinations were good or bad? This is something creative humans are great at – generating ideas, and discarding the ones they don’t like in favour of ones they do. This creative process gradually leads to new works of art, be they music, painting, or magic tricks. We tackled this problem by first running some experiments with real people to find out what kind of things would make the jigsaw seem more ‘magical’ to a spectator. We also did experiments to find out what would influence a magician performing the trick. This information was then fed into the algorithm that searched for good jigsaw combinations, giving the computer a mechanism for evaluating the jigsaws, similar to the ones a human might use when trying to design a similar trick.
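
The project’s real fitness scores came from those experiments with spectators and magicians, which we can’t reproduce here, but the generate, score, select and mutate loop itself is easy to sketch. In this toy Python version the ‘design’ is just a string and the fitness function simply counts matching letters – a stand-in for the magicalness measurements used on the jigsaws:

```python
import random

TARGET = "ABRACADABRA"          # stands in for the 'ideal' design we are evolving towards
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Score a candidate: here, how many positions match the target.
    In the magic-jigsaw project this score came from experiments on
    what spectators and magicians found most 'magical'."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly change a few 'pieces' of the design."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(parent_a, parent_b):
    """Combine two good designs by splicing them at a random point."""
    cut = random.randrange(len(TARGET))
    return parent_a[:cut] + parent_b[cut:]

def evolve(population_size=100, generations=200):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(population_size)]
    for generation in range(generations):
        population.sort(key=fitness, reverse=True)        # best designs first
        if fitness(population[0]) == len(TARGET):
            return population[0], generation
        survivors = population[: population_size // 5]    # weed out weak designs
        population = [mutate(crossover(random.choice(survivors),
                                       random.choice(survivors)))
                      for _ in range(population_size)]
    return max(population, key=fitness), generations

best, generations_needed = evolve()
print(f"Best design after {generations_needed} generations: {best}")
```

Run it a few times and the population quickly homes in on the target; swap the toy fitness function for scores based on human experiments and you have the flavour of the approach described above.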

More tricks

We went on to use these computational techniques to create other new tricks, including a card trick, a mind reading trick on a mobile phone, and a trick that relies on images and words to predict a spectator’s thought processes. You can find out more, including downloading the jigsaw, at www.Qmagicworld.wordpress.com

Is it creative, though?

There is a lot of debate about whether this kind of ‘artificial intelligence’ software is really creative in the way humans are, or in fact creative in any way at all. After all, how would the computer know what to look out for if the researchers hadn’t configured the algorithms in specific ways? Does a computer even understand the outputs that it creates? The fact is that these systems do produce novel things, though – new music, new magic tricks – and sometimes in surprising and pleasing ways not previously thought of.

Are they creative (and even intelligent)? Or are they just automatons bound by the imaginations of their creators? What do you think?



EPSRC supports this blog through research grant EP/W033615/1. 

Understanding Ultron: A Turing test for world domination – Peter McOwan’s reassuring article that robots probably aren’t out to get us

by Peter McOwan, Queen Mary University of London (written in 2015)

‘Robot Mech Machine’ Image by Computerizer from Pixabay

Avengers: Age of Ultron is the latest film about robots or artificial intelligences (AI) trying to take over the world. AI is becoming ever present in our lives, at least in the form of software tools that demonstrate elements of human-like intelligence. The AIs in our mobile phones apply and adapt their rules to learn to serve us better, for example. But fears of AI’s potential negative impact on humanity remain, as seen in its projection onto characters like Ultron, a super-intelligence accidentally created by the Avengers.

But what relation do the evil AIs of the movies have to scientific reality? Could an AI take over the world? How would it do it? And why would it want to? AI movie villains need to consider the whodunit staples of motive and opportunity.

 

Motive? What motive?

Let’s look at the motive. Few would say intelligence in itself unswervingly leads to a desire to rule the world. In movies AIs are often driven by self-preservation, a realisation that fearful humans might shut them down. But would we give our AI tools cause to feel threatened? They provide benefits for us, and there also seems little reason to create a sense of self-awareness in a system that searches the web for the nearest Italian restaurant, for example.

Another popular motive for AIs’ evilness is their zealous application of logic. In Ultron’s case the goal of protecting the earth can only be accomplished by wiping out humanity. This destruction by logic is reminiscent of the notion that a computer would select a stopped clock over one that is two seconds slow, as the stopped clock is right twice a day whereas the slow one is never right. Ultron’s plot motivation, based on brittle logic combined with indifference to life, seems at odds with today’s AI systems, which reason mathematically with uncertainty and are built to work safely with users.

 

Opportunity Knocks

When we consider an AI’s opportunity to rule the world we are on somewhat firmer ground. The famous Turing Test of machine intelligence was set up to measure a particular skill – the ability to conduct a believable conversation. The premise is that if you can’t tell the difference between AI and human skill, the AI has passed the test and should be considered as intelligent as humans.

So what would a Turing Test for the ‘skill’ of world domination look like? To explore that we need to compare antisocial AI behaviours with the attributes expected of a human world dominator. World dominators need to control important parts of our lives, say our access to money or our ability to buy a house. AI does that already – lending decisions are frequently made by an AI sifting through mountains of information to decide your creditworthiness. AIs now trade on the stock market too.

An overlord would give orders and expect them to be followed. Anyone who has stood helplessly at a shop’s self-service till as it makes repeated bagging related demands of them already knows what it feels like to be bossed about by AIs.

 

Kill Bill?

Finally, no megalomaniac Hollywood robot would be complete without at least some desire to kill us. Today military robots can identify targets without human intervention. It is currently a human controller that gives permission to attack, but it’s not a stretch to say that the potential to auto-kill exists in these AIs; we would just need to change the computer code to allow it.

These examples arguably show AI in control of limited but significant parts of life on earth, but to truly dominate the world, movie style, these individual AIs would need to start working together to create a synchronised AI army – that bossy self-service till talking to your health monitor and refusing to sell you beer, then both ganging up with a credit scoring system to only raise your credit limit if you both buy a pair of trainers with a built-in GPS tracker and eat only the kale from your smart fridge, and then only after the shoe data shows you completed the required five-mile run.

It’s a worrying picture but fortunately I think it’s an unlikely one. Engineers worldwide are developing the Internet of Things, networks connecting all manner of devices together to create new services. These are pieces of a jigsaw that would need to join together and form a big picture for total world domination. It’s an unlikely situation – too much has to fall into place and work together. It’s a lot like the infamous plot-hole in Independence Day – where an Apple Mac and an alien spaceship’s software inexplicably have cross-platform compatibility. [See video below for a possible answer!]

Our earthly AI systems are written in a range of computer languages, hold different data in different ways and use different, non-compatible rule sets and learning techniques. Unless we design them to be compatible, there is no reason why two safely designed AI systems, developed by separate companies for separate services, would spontaneously blend to share capabilities and form some greater common goal without human intervention.

So could AIs, and the robot bodies containing them, pass the test and take over the world? Only if we humans let them, and help them a lot. Why would we?

Perhaps because humans are the stupid ones!

 

Peter McOwan introducing Age of Ultron

You can see the author of this article giving a talk at the Genesis Cinema in Stepney Green in 2015 to introduce the film.

Background

This post was first published on CS4FN and a copy can also be found on pages 8-9 of ‘Serious Fun’ – Issue 26 of CS4FN magazine, which celebrated the life of Peter McOwan, who died in 2019. Peter was the co-founder (with Paul Curzon) of the CS4FN magazine and website.

All of our material is free to download from: https://cs4fndownloads.wordpress.com

 

Further reading

DragonflyAI: I see what you see

What use is a computer that sees like a human? Can’t computers do better than us? Well, such a computer can predict what we will and will not see, and there is BIG money to be gained doing that!

The Hong Kong Skyline


Peter McOwan’s team at Queen Mary spent 10 years doing exploratory research understanding the way our brains really see the world, exploring illusions, inventing games to test the ideas, and creating a computer model to test their understanding. Ultimately they created a program that sees like a human. But what practical use is a program that mirrors the oddities of the way we see the world? Surely a computer can do better than us: noticing all the things that we miss or misunderstand? Well, for starters the research opens up exciting possibilities for new applications, especially for marketeers.

The Hong Kong Skyline as seen by DragonflyAI


A fruitful avenue to emerge is ‘visual analytics’ software: applications that predict what humans will and will not notice. Our world is full of competing demands, overloading us with information. All around us things vie to catch our attention, whether a shop window display, a road sign warning of danger or an advertising poster.

Imagine: a shop has a big new promotion designed to entice people in, but no more people enter than normal. No-one notices the display. Their attention is elsewhere. Another company runs a web ad campaign, but it has no effect, as people’s eyes are pulled elsewhere on the screen. A third company pays to have its products appear in a blockbuster film. Again, a waste of money. In surveys afterwards no one knew the products had been there. A town council puts up a new warning sign at a dangerous bend in the road, but the crashes continue. These are examples of situations where predicting in advance where people will look allows you to get it right. In the past this was done either by long and expensive user testing, perhaps using software that tracks where people look, or by having teams of ‘experts’ discuss what they think will happen. What if a program made the predictions in a fraction of a second beforehand? What if you could tweak things repeatedly until your important messages could not be missed?

Queen Mary’s Hamit Soyel turned the research models into a program called DragonflyAI, which does exactly that. The program analyses all kinds of imagery in real time and predicts the places where people’s attention will, and will not, be drawn. It works whether the content is moving or not, and whether it is in the real world, completely virtual, or both. This gives marketeers the power to predict, and so influence, human attention so that people see the things they want them to see. The software quickly caught the attention of big global companies like NBC Universal, GSK and Jaywing, who now use the technology.
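
DragonflyAI’s own models are proprietary, so the sketch below is only our illustration of the general idea behind a saliency map: score each part of an image by how much it stands out from its surroundings, and the hottest spots of the map predict where eyes will be drawn. Here the only cue is centre-surround brightness contrast on a toy greyscale image; real attention models combine many more cues such as colour, edges and motion:

```python
import numpy as np

def saliency_map(image, surround=3):
    """Very simple attention model: a pixel is 'salient' when its
    brightness differs a lot from the average of its neighbourhood.
    This is a toy illustration, not DragonflyAI's actual algorithm."""
    height, width = image.shape
    padded = np.pad(image, surround, mode="edge")
    salience = np.zeros_like(image, dtype=float)
    for y in range(height):
        for x in range(width):
            # Neighbourhood of the pixel in the padded image, centred on (y, x).
            neighbourhood = padded[y:y + 2 * surround + 1, x:x + 2 * surround + 1]
            salience[y, x] = abs(image[y, x] - neighbourhood.mean())
    return salience / salience.max()   # normalise scores to 0..1

# A toy 'poster': a dull grey background with one bright patch (the promotion).
poster = np.full((20, 30), 0.4)
poster[8:12, 20:25] = 0.95

attention = saliency_map(poster)
y, x = np.unravel_index(attention.argmax(), attention.shape)
print(f"Model predicts attention is drawn to around row {y}, column {x}")
```

On this toy ‘poster’ the model picks out the bright promotional patch against the dull background – the kind of prediction that lets a designer check, in a fraction of a second, whether the important message will actually be noticed.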

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]