Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial Intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no capacity for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? How does it differ from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is controlled by a task list turned into sophisticated, human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware, able to make his own decision to turn on his evil master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, and where those around him are apparently being illogical?

The novel explores important topics we all need to grapple with, in a fun but serious way. It looks at what AI tools are for and the difference between a tool and a person, even when they do the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems. Yet we have been replacing humans with computers for decades on the back of that promise, and prices still go up while inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable: we can buy anything we want from the comfort of our sofas, self-driving cars will soon take us anywhere we want, we can get answers to any question we care to ask, ever more routine jobs are done by machines, and many areas of work, boring or otherwise, are becoming a thing of the past with a promise of utopia. But are we solving problems or making them with our drive to automate everything? Is it good for society as a whole or just good for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education, and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore the consequences, though as Ursula Le Guin said: the best science fiction doesn’t predict the future, it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does that, if in a satirical way. It is a must read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.


AI owes us an explanation

Question mark and silhouette of hands holding a smart phone with question mark
Image by Chen from Pixabay

Why should AI tools explain why? Erhan Pisirir and Evangelia Kyrimi, researchers at Queen Mary University of London, explain.

From the moment we start talking, we ask why. A three-year-old may ask fifty “whys” a day. ‘Why should I hold your hand when we cross the road?’ ‘Why do I need to wear my jacket?’ Every time their parent provides a reason, the toddler learns and makes sense of the world a little bit more.

Even when we are no longer toddlers trying to figure out why the spoon falls on the ground and why we should not touch the fire, it is still in our nature to question the reasons. The decisions and the recommendations given to us have millions of “whys” behind them. A bank might reject our loan application. A doctor might urge us to go to hospital for more tests. And every time, our instinct is to ask the same question: Why? We trust advice more when we understand it.

Nowadays the advice and recommendations come not only from other humans but also from computers with artificial intelligence (AI), such as a bank’s computer systems or health apps.  Now that AI systems are giving us advice and making decisions that affect our lives, shouldn’t they also explain themselves?

That’s the promise of Explainable AI: building machines that can explain their decisions or recommendations. These machines must be able to say what is decided, but also why, in a way we can understand.

From trees to neurons

For decades we have been trying to make machines think for us. A machine does not have the thinking, or reasoning, abilities of humans, so we need to give it instructions on how to think. When computers were less capable, these instructions were simpler. For example, they could look like a tree: think of a tree where each branch is a question with several possible answers, and each answer creates a new branch. Do you have a rash? Yes. Do you have a temperature? Yes. Do you have nausea? Yes. Are the spots purple? Yes. If you push a glass against them, do they fade away? No… Go to the hospital immediately.

The tree of decisions naturally gives whys connected to the tips of the paths taken: you should go to the hospital because your collection of symptoms (a rash of purple spots, a temperature and nausea, and especially the fact that the spots do not fade under a glass) means it is likely you have Meningitis. Because it is life-threatening and can get worse very quickly, you need to get to a hospital urgently. An expert doctor can check reasoning like this and decide whether that explanation is actually good reasoning about whether someone has Meningitis or not, or, more to the point, should rush to the hospital.
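
A minimal sketch, in Python, of this kind of decision tree and the “why” it gets for free from the path of answers taken (the questions and advice here are invented to mirror the toy example above, and are not medical guidance):

```python
# A toy decision-tree sketch: each node asks a question and follows the
# yes/no branch taken; the path of answers becomes the explanation ("why").
# Questions and advice are illustrative only, not medical guidance.

def ask(question):
    return input(question + " (y/n) ").strip().lower().startswith("y")

def triage():
    path = []  # record every question and answer to build the "why"

    def step(question):
        answer = ask(question)
        path.append(question + (" Yes." if answer else " No."))
        return answer

    if (step("Do you have a rash?") and step("Do you have a temperature?")
            and step("Are the spots purple?")
            and not step("Do the spots fade under a pressed glass?")):
        decision = "Go to the hospital immediately."
    else:
        decision = "Ask a pharmacist or doctor for advice."
    return decision, " ".join(path)

decision, why = triage()
print(decision)
print("Because:", why)
```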

Over time, humans have made computers capable of much more complex tasks. With this, their thinking instructions became more complex too. Nowadays they might look like more complicated networks instead of trees with branches. They might look like a network of neurons in a human brain, for example. These complex systems make computers great at answering more difficult questions successfully. But unlike with a tree of decisions, humans can no longer understand how the computer reaches its final answer just by glancing at its system of thinking. It is no longer the case that following a simple path of branches through a decision tree gives a definite answer, never mind a why. Now there are loops and backtracks, splits and joins, and the decisions depend on weightings of answers, not just a definite Yes or No. For example, with Meningitis, according to the NHS website, there are many more symptoms than above and they can appear in any order or not at all. There may not even be a rash, or the rash may fade when pressure is applied. It is complicated and certainly not as simple as our decision tree suggests (the NHS says “Trust your instincts and do not wait for all the symptoms to appear or until a rash develops. You should get medical help immediately if you’re concerned about yourself or your child.”). Certainly, the situation is NOT simple enough to say from a decision tree, for example, “Do not worry, you do not have Meningitis because your spots are not purple and did fade in the glass test”. An explanation like that could kill someone. The decision has to be made from a complex web of inter-related facts. AI tools require you to just trust their instincts!

Let us, for a moment, forget about branches and networks, and imagine that AI is a magician’s hat: something goes in (a white handkerchief) and something else at the tap of a wand magically pops out (a white rabbit).  With a loan application, for example, details such as your age, income, or occupation go in, and a decision comes out: approved or rejected.

Inside the magician’s hat

Nowadays researchers are trying to make the magician’s hat transparent so that you can have a sneak peek of what is going on in there (it shouldn’t seem like magic!). Was the rabbit in a secret compartment, did the magician move it from the pocket and put it in at the last minute or did it really appear out of nowhere (real magic)? Was the decision based on your age or income, or was it influenced by something that should be irrelevant like the font choice in your application?

Currently, explainable AI methods can answer several different kinds of question (though not always effectively); a small code sketch illustrating the ‘what if’ kind follows the list:

  • Why: Your loan was approved because you have a regular income record and have always paid back loans in the past.
  • Why not: Your loan application was rejected because you are 20 years old and are still a student.
  • What if: If you earned £1000 or more each month, your loan application would not have been rejected.
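
Here is a minimal Python sketch of that ‘what if’ (counterfactual) idea, using an invented loan rule and threshold rather than any real bank’s logic: it searches for the smallest monthly income that would flip a rejection into an approval.

```python
# A toy "what if" (counterfactual) explanation: with a made-up loan rule,
# find the smallest monthly income that would flip a rejection to approval.

def approve(applicant):
    # Invented decision rule, purely for illustration
    return applicant["monthly_income"] >= 1000 and applicant["missed_payments"] == 0

def what_if_income(applicant, step=50, limit=5000):
    """Search upwards for the income at which the decision would change."""
    if approve(applicant):
        return None  # already approved: no counterfactual needed
    trial = dict(applicant)
    while trial["monthly_income"] < limit:
        trial["monthly_income"] += step
        if approve(trial):
            return trial["monthly_income"]
    return None  # changing income alone would not change the decision

applicant = {"monthly_income": 800, "missed_payments": 0}
needed = what_if_income(applicant)
if needed:
    print(f"Rejected. If you earned £{needed} or more each month, "
          "your application would have been approved.")
```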

Researchers are inventing many different ways to give these explanations: for example, heat maps that highlight the most important pixels in an image, lists of pros and cons that show the factors for and against a decision, visual explanations such as diagrams or highlights, or natural-language explanations that sound more like everyday conversations.

What explanations are good for

The more interactions people have with AI, the more we see why AI explanations are important. 

  • Understanding why AI made a specific recommendation helps people TRUST the system more; for example, doctors (or patients) might want to know why AI flagged a tumour before acting on its advice. 
  • The explanations might expose whether AI recommendations involve discrimination and bias, increasing FAIRNESS. Think about the loan rejection scenario again: what if the explanation shows that the reason for the AI’s decision was your race? Is that fair?
  • The explanations can help researchers and engineers with DEBUGGING, helping them understand and fix problems with AI faster.
  • AI explanations are also becoming more and more required by LAW. The General Data Protection Regulation (GDPR) gives people a “right to explanation” for some automated decisions, especially in high-stakes areas such as healthcare and finance.

The convincing barrister

One thing to keep in mind is that the presence of explanations does not automatically make an AI system perfect. Explanations themselves can be flawed. The biggest catch is when an explanation is convincing when it shouldn’t be. Imagine a barrister with charming social skills who can spin a story and get a clearly guilty client off. AI explanations should not aim to be blindly convincing whether the AI is right or wrong. In the cases where the AI gets it wrong (and from time to time it will), the explanations should make this clear rather than falsely reassuring the human.

The future 

Explainable AI isn’t an entirely new concept. Decades ago, early expert systems in medicine already included “why” buttons to justify their advice. But only in recent years has explainable AI become a major trend, as AI systems have grown more powerful and concerns have increased about AI surpassing human decision-making while potentially making some bad decisions.

Researchers are now exploring ways to make explanations more interactive and human friendly, similar to how we can ask ChatGPT questions like ‘what influenced this decision the most?’ or ‘what would need to change for a different outcome?’ They are trying to tailor an explanation’s content, style and representation to the users’ needs.

So next time AI makes a decision for you, ask yourself: could it tell me why? If not, maybe it still has some explaining to do.

Erhan Pisirir and Evangelia Kyrimi, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.


An AI Oppenheimer Moment?

A nuclear explosion mushroom cloud
Image by Harsh Ghanshyam from Pixabay

All computer scientists should watch the staggeringly good film, Oppenheimer, by Christopher Nolan. It charts the life of J. Robert Oppenheimer, “father of the atom bomb”, and the team he put together at Los Alamos, as they designed and built the first weapons of mass destruction. The film is about science, politics and war, not computer science and all the science is quantum physics (portrayed incredibly well). Despite that, Christopher Nolan believes the film does have lessons for all scientists, and especially those in Silicon Valley.

Why? In an interview, he suggested that given the current state of Artificial Intelligence the world is at “an Oppenheimer moment”. Computer scientists, in the 2020s, just like physicists in the 1940s, are creating technology that could be used for great good but also cause great harm (including, in both cases, a possibility that we use it in a way that destroys civilisation). Should scientists and technologists stay outside the political realm and leave discussion of what to do with their technology to politicians, while the scientists do as they wish in the name of science? That leaves society playing a game of catch up. Or do scientists and technologists have more responsibility than that?

Artificial Intelligence isn’t so obviously capable of doing bad things as an atomic bomb was and still clearly is. There is also no clear imperative, such as Oppenheimer had, to get there before the fascist Nazi party, who were clearly evil and already using technology for evil (now the main imperative seems to be just to get there before someone else makes all the money, not you). It is, therefore, far easier for those creating AI technology to ignore both the potential and the real effects of their inventions on society. However, it is now clear AI can do, and already is doing, lots of bad as well as good. Many scientists understand this and are focussing their work on developing versions that are, for example, built to be transparent and accountable, are not biased, racist, homophobic, … that do put children’s protection at the heart of what they do… Unfortunately, not all are. And there is one big elephant in the room. AI can be, and is being, put in control of weapons in wars that are actively taking place right now. There is an arms race to get there before the other side does. From mass identification of targets in the Middle East to AI-controlled drone strikes in the Ukraine war, military AI is a reality and is in control of killing people with only minimal, if any, real humans in the loop. Do we really want that? Do we want AIs in control of weapons of mass destruction? Or is that total madness that will lead only to our destruction?

Oppenheimer was a complex man, as the film showed. He believed in peace but, a brilliant theoretical physicist himself, he managed a group of the best scientists in the world in the creation of the greatest weapon of destruction ever built to that point, the first atom bomb. He believed it had to be used once so that everyone would understand that all-out nuclear war would end civilisation (it was of course used against Japan, not the already defeated Nazis, the original justification). However, he also spent the rest of his life working for peace, arguing that international agreements were vital to prevent such weapons ever being used again. In times of relative peace people forget about the power we have to destroy everyone. The worries only surface again when there is international tension and wars break out, such as in the Middle East or Ukraine. We need always to remember the possibility is there, though, lest we use these weapons by mistake. Oppenheimer thought the bomb would actually end war, having come up with the idea of “mutually assured destruction” as a means for peace. The phrase aimed to remind people that these weapons could never be used. He worked tirelessly, arguing for international regulation and agreements to prevent their use.

Christopher Nolan was asked, if there was a special screening of the film in Silicon Valley, what message he would hope the computer scientists and technologists would take from it. His answer was that they should take home the message of the need for accountability. Scientists do have to be accountable for their work, especially when it is capable of having massively bad consequences for society. A key part of that is engaging with the public, industry and government; not with vested interests pushing for their own work to be allowed, but to make sure the public and policymakers do understand the science and technology so there can be fully informed debate. Both international law and international policy are now a long way behind the pace of technological development. The willingness of countries to obey international law is also disintegrating, and there is a subtle new difference from the 1940s: technology companies are now as rich and powerful as many countries, so corporate accountability is now needed too, not just agreements between countries.

Oppenheimer was vilified over his politics after the war, and his name is now forever linked with weapons of mass destruction. He certainly didn’t get everything right: there have been plenty of wars since, so he didn’t manage to end all war as he had hoped, though so far no nuclear war. However, despite the vilification, he did spend his life making sure everyone understood the consequences of his work. Asked if he believed we had created the means to kill tens of millions of Americans (everyone) at a stroke, his answer was a clear “Yes”. He did ultimately make himself accountable for the things he had done. That is something every scientist should do too. The Doomsday Clock is closer to midnight than ever (89 seconds to midnight – man-made global catastrophe). Let’s hope the Tech Bros and scientists of Silicon Valley are willing to become accountable too, never mind countries. All scientists and technologists should watch Oppenheimer and reflect.

– Paul Curzon, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.


If you go down to the woods today…

A girl walking through a meadow full of flowers within woods
Image by Jill Wellington from Pixabay

In the 2025 RHS Chelsea Flower Show there was one garden that was about technology as well as plants: the Avanade Intelligent Garden, exploring how AI might be used to support plants. Each of the trees contained probes that sensed and recorded data about them, which could then be monitored through an app. This takes pioneering research from over two decades ago a step further, incorporating AI into the picture and making it mainstream. Back then a team led by Yvonne Rogers built an ambient wood aiming to add excitement to a walk in the woods...

Mark Weiser had a dream of ‘Calm Computing’ and while computing sometimes seems ever more frustrating to use, the ideas led to lots of exciting research that saw at least some computers disappearing into the background. His vision was driven by a desire to remove the frustration of using computers but also the realization that the most profound technologies are the ones that you just don’t notice. He wanted technology to actively remove frustrations from everyday life, not just the ones caused by computers. He wrote of wanting to “make using a computer as refreshing as taking a walk in the woods.”

Not calm, but engaging and exciting!

No one argues that computers should be frustrating to use, but Yvonne Rogers, then of the Open University, had a different idea of what the new vision could be. Not calm. Anything but calm, in fact (though not frustrating, of course). Not calm, but engaging and exciting!

Her vision of Weiser’s tranquil woods was not relaxing but provocative and playful. To prove the point her team turned some real woods in Sussex into an ‘Ambient Wood’. The Ambient Wood was an enhanced wood. When you entered it you took probes with you that you could point and poke with. They allowed you to take readings of different kinds in easy ways. Time-hopping ‘Periscopes’ placed around the woods allowed you to see those patches of woodland at other times of the year. There was also a special woodland den where you could then see the bigger picture of the woods as all your readings were pulled together using computer visualisations.

Not only was the Ambient Wood technology visible and in your face but it made the invisible side of the wood visible in a way that provoked questions about the wildlife. You noticed more. You saw more. You thought more. A walk in the woods was no longer a passive experience but an active, playful one. Woods became the exciting places of childhood stories again but now with even more things to explore.

The idea behind the Ambient Wood, and similar ideas like Bristol’s Savannah project, where playing fields are turned into African Savannah, was to revisit the original idea of computers but in a new context. Computers started as tools, and tools don’t disappear, they extend our abilities. Tools originally extended our physical abilities – a hammer allows us to hit things harder, a pulley to lift heavier things. They make us more effective and allow us to do things a mere human couldn’t do alone. Computer technology can do a similar thing but for the human intellect…if we design them well.

“The most important thing the participants gained was a sense of wonderment at finding out all sorts of things and making connections through discovering aspects of the physical woodland (e.g., squirrel’s droppings, blackberries, thistles)”

– Yvonne Rogers

The Weiser dream was that technology invisibly watches the world and removes the obstacles in the way before you even notice them. It’s a little like the way servants to the aristocracy were expected to always have everything just right but at the same time were not to be noticed by those they served. The way this is achieved is to have technology constantly monitoring, understanding what is going on and how it might affect us, and then calmly fixing things. The problem at the time was that this needs really ‘smart’ technology – a high level of Artificial Intelligence – to achieve, and that proved more difficult than anyone imagined (though perhaps we are now much closer than we were). Our behaviour and desires, however, are full of subtlety and much harder to read than was imagined. Even a super-intellect would probably keep getting it wrong.

There are also ethical problems. If we do ever achieve the dream of total calm we might not like it. It is very easy to be gung ho with technology and not realize the consequences. Calm computing needs monitors – the computer measuring everything it can so it has as much information as possible to make decisions from (see Big Sister is Watching You).

A classic example of how this can lead to people rejecting technology intended to help is in a project to make a ‘smart’ residential home for the elderly. The idea was that by wiring up the house to track the residents and monitor them, the nurses would be able to provide much better care, and relatives would be able to see how things were going. The place was filled with monitors. For example, sensors in the beds measured residents’ weight while they slept. Each night the occupant’s weight could invisibly be taken and the nurses alerted of worrying weight loss over time. The smart beds could also detect tossing and turning, so someone having bad nights could be helped. A smart house could use similar technology to help you or I have a good night’s sleep and help us diet.

The problem was the beds could tell other things too: things that the occupants preferred to keep to themselves. Nocturnal visitors also showed up in the records. That’s the problem if technology looks after us every second of the day: the records may give away to others far more than we are happy with.

Yvonne’s vision was different. It was not that the computers try to second-guess everything but instead that they extend our abilities. It is quite easy for new technology to leave us intellectually poorer than we were. Calculators are a good example. Yes, we can do more complex sums quickly now, but at the same time, without a calculator, many people can’t do the sums at all. Our abilities have both improved and been damaged at the same time. Generative AI seems to be currently heading the same way. What the probes do, instead, is extend our abilities, not reduce them: allowing us to see the woods in a new way, but to use the information however we wish. The probes encourage imagination.

The alternative to the smart house (or calculator) that pampers you, allowing your brain to stay in neutral, or the residential home that monitors you for the sake of the nurses and your relatives, is one where the sensors are working for you: where you are the one the bed reports to, helping you then make decisions about your health, or where the monitors you wear are (only) part of a game that you play because it’s fun.

What next? Yvonne suggested the same ideas could be used to help learning and exploration in other ways, understanding our bodies: “I’d like to see kids discover new ways of probing their bodies to find out what makes them tick.”

So if Yvonne’s vision is ultimately the way things turn out, you won’t be heading for a soporific future while the computer deals with real life for you. Instead it will be a future where the computers are sparking your imagination, challenging you to think, filling you with delight…and where the woods come alive again just as they do in the storybooks (and in the intelligent garden).

Paul Curzon, Queen Mary University of London

(adapted from the archive)



This page is funded by EPSRC on research agreement EP/W033615/1.


Film Futures: Brassed Off

The pit head of a colliery at sunset with a vivid red sky behind the setting sun
Image from Pixabay

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film Brassed Off.

***SPOILER ALERT***

Brassed Off, starring Pete Postlethwaite, Tara Fitzgerald and Ewan McGregor, is set at a time when the UK coal and steel industries were being closed down with terrible effects on local communities across the North of England and Wales. It tells the story of the closing of the fictional Grimley Pit (based on the real mining village of Grimethorpe), from the point of view of the members of the colliery brass band and their families. The whole village relies on the pit for their livelihoods.

Danny, the band’s conductor, is passionate about the band and wants to keep it going, even if the pit closes. Many of the other band members are totally despondent and just want to take the money that is on offer if they agree to the closure without a fight. They feel they have no future, and have given up hope over both the pit and the band (why have a colliery band if there is no colliery?).

Gloria, a company manager who grew up in the village, arrives, conducting a feasibility study for the company to determine whether the pit is profitable or not, as justification for keeping it open or closing it down. A wonderful musician, she joins the band but doesn’t tell them that she is now management (including not telling her childhood boyfriend, and band member, Andy).

The story follows the battle to keep the pit open, and the effects on the community if it closes, through the eyes of the band members as they take part in a likely final ever brass band competition…

Brassed Off: with computer science

In our computer science film future version, the pit is still closing and Gloria is still management, but with a Computer Science PhD in digital music, she has built a flugelhorn-playing robot with a creative AI brain. It can not only play brass band instruments but arrange and compose too. On arriving at Grimley she asks if her robot can join the band. Initially, everyone is against the idea, but on hearing how good it is, and how it will help them do well in the national brass band competition, they relent. The band, with robot, go all the way to the finals and ultimately win…

The pit, however, closes and there are no jobs at all, not even low quality work in local supermarkets (automatic tills and robot shelf-stackers have replaced humans) or call centres (now replaced by chatbots). Gloria also loses her job due to a shake-out of middle managers as the AIs take over the knowledge economy jobs. Luckily, she is OK, as, with university friends, she starts a company building robot musicians which is an amazing success. The band never make the finals again as bands full of Gloria’s flugelhorn and cornet playing robots take over (also taking the last of the band’s self-esteem). In future years, all the brass bands in the competition are robot bands as, with all the pits closing, the communities around them collapse. The world’s last ever flugelhorn player is a robot. Gloria and Andy never do get to kiss…

In real life…

Could a robot play a musical instrument? One existed centuries before the computer age. In 1737 Jacques de Vaucanson revealed his flute-playing automaton to the public. A small, human-height figure, it played a real flute, which could be replaced to prove the machine could really play a real instrument. Robots have since played various instruments, including drums, and a cello-playing robot has played with an orchestra in Malmö. While robot orchestras and bands are likely, it seems less likely that humans would stop playing as a result.

Can an AI compose music? The Victorian Ada Lovelace predicted they one day would, a century before the first computer was ever built. She realised that this would be the case just from thinking about the machines that Charles Babbage was trying to build. Her prediction eventually came true. Now, of course, generative AI is being used to compose music, and can do so in any style, whether classical or pop. How good, or creative, it is may be debated, but it won’t be long before they have super-human music composition powers.

So a flugelhorn-playing robot that also composes music is not a pipe dream!

What about the social costs that are the real theme of the film, though? When the UK pits and steelworks closed, whole communities were destroyed with great, and long lasting, social cost. It was all well and good for politicians to say there were new jobs being created by the new service and knowledge economy, but that was no help when no thought or money had actually been put into helping communities make the transition. “Get on your bike” was their famous, if ineffective, solution. For example, if the new jobs were to be in technology as suggested, then massive technology training programmes for those put out of work were needed, along with financial support in the meantime. Instead, whole communities were effectively left to rot and inequality increased massively. Areas in the North of England and Wales that had been the backbone of the UK economy still haven’t really recovered 40 years later.

Are we about to make the same mistakes again? We are certainly arriving at a similar point, but now it is those knowledge economy jobs that were supposed to be the saviours 40 years ago that are under threat from AI. There may well be new jobs as old ones disappear… but even if there are, will the people who lose their jobs be in a position to take the new ones, or are we heading towards a whole new lost generation? As back then, without serious planning and support, including successful efforts to reduce inequality in society, the changes coming could again cause devastation, this time much more widespread. As it stands, technology is increasing, not decreasing, inequality. We need to start now, including coming up with a new economic model of how the world will work that actively reduces inequality in society. Many science fiction writers have written of utopian futures where people only work for fun (eg Arthur C Clarke’s classic “Childhood’s End” is one I’m reading at the moment), but that only happens if wealth is not sucked up by the lucky few. (In “Childhood’s End” it takes alien invaders to force out inequality.)

We can avoid a dystopian future, but only if we try…really hard.



This page is funded by EPSRC on research agreement EP/W033615/1.


ELIZA: the first chatbot to fool people

Chatbots are now everywhere. You seemingly can’t touch a computer without one offering its opinion, or trying to help. This explosion is a result of the advent of what are called Large Language Models: sophisticated programs that in part copy the way human brains work. Chatbots have been around far longer than the current boom, though. The earliest successful one, called ELIZA, was built in the 1960s by Joseph Weizenbaum, who, with his Jewish family, had fled Nazi Germany in the 1930s. Despite its simplicity ELIZA was very effective at fooling people into treating it as if it were a human.

Head thinking in a speech bubble
Image adapted from one by Gerd Altmann from Pixabay

Weizenbaum was interested in human-computer interaction, and whether it could be done in a more human-like way than just by typing rigid commands as was done at the time. In doing so he set the ball rolling for a whole new metaphor for interacting with computers, distinct from typing commands or pointing and clicking on a desktop. It raised the possibility that one day we could control computers by having conversations with them, a possibility that is now a reality.

His program, ELIZA, was named after the character in the play Pygmalion and musical My Fair Lady. That Eliza was a working-class woman who was taught to speak with a posh accent, gradually improving her speech, and part of the idea of ELIZA was that it could gradually improve based on its interactions. At core, though, it was doing something very simple. It just looked for known words in the things the human typed and then output a sentence triggered by that keyword, such as a transformation of the original sentence. For example, if the person typed “I’m really unhappy”, it might respond “Why are you unhappy?”.

In this way it was just doing a more sophisticated version of the earliest “creative” writing program – Christopher Strachey’s Love Letter writing program. Strachey’s program wrote love letters by randomly picking keywords and putting them into a set of randomly chosen templates to construct a series of sentences.

The keywords that ELIZA looked for were built into its script, written by the programmer, and each was allocated a score. It found all the keywords in the person’s sentence but used the one allocated the highest score. Words like “I” had a high score so were likely to be picked if present. A sentence starting “I am …” can be transformed into a response “Why are you …?” as in the example above. To make this seem realistic, the program needed to have a variety of different templates to provide enough variety of responses, though. To create the response, ELIZA broke down the sentence typed into component parts, picked out the useful parts of it and then built up a new response. In the above example, it would have pulled out the adjective “unhappy” to use in its output with the template part “Why are you …”, for example.

If no keyword was found, and so ELIZA had no rule to apply, it could fall back on a memory mechanism where it stored details of the past statements typed by the person. This allowed it to go back to an earlier thing the person had said and use that instead. It just moved on to the next highest scoring keyword from the previous sentence and built a response based on that.
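
Here is a minimal Python sketch of this kind of scored keyword-and-template mechanism, plus a much-simplified version of the memory fallback. The keywords, scores and templates are invented for illustration; the real ELIZA script was far richer.

```python
import random

# A toy ELIZA-style responder: scored keywords, each with response templates.
# "{0}" is filled with whatever followed the keyword in the user's sentence.
# Keywords, scores and templates are invented; the real DOCTOR script was far richer.
SCRIPT = [
    (10, "i am",   ["Why are you {0}?", "How long have you been {0}?"]),
    (8,  "i feel", ["Tell me more about feeling {0}.", "Do you often feel {0}?"]),
    (5,  "mother", ["Tell me about your family.", "What comes to mind about your mother?"]),
]

memory = []  # earlier remarks, used as a fallback when nothing matches

def keyword_response(sentence):
    """Return a response built from the highest-scoring keyword found, or None."""
    s = sentence.lower().strip(" .!?")
    for score, keyword, templates in sorted(SCRIPT, reverse=True):
        if keyword in s:
            rest = s.split(keyword, 1)[1].strip()
            return random.choice(templates).format(rest)
    return None

def respond(sentence):
    reply = keyword_response(sentence)
    if reply is None and memory:
        # Simplified memory mechanism: return to something said earlier
        reply = "Earlier you said “" + memory.pop(0) + "”. Tell me more about that."
    memory.append(sentence.strip())
    return reply or "Please go on."

while True:
    line = input("> ")
    if line.lower().strip() in ("bye", "quit"):
        break
    print(respond(line))
```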

ELIZA came with different “characters” that could be loaded into it, with different keywords and templates of how to respond. The reason ELIZA gained so much fame was due to its DOCTOR script. It was written to behave like a psychotherapist. In particular, it was based on the ideas of psychologist Carl Rogers who developed “person-centred therapy”, where a therapist, for example, echoes back things that the person says, always asking open-ended questions (never yes/no ones) to get the patient talking. (Good job interviewers do a similar thing!) The advantage of it “pretending” to be a psychotherapist like this is that it did not need to be based on a knowledge bank of facts to seem realistic. Compare that with, say, a chatbot that aims to have conversations about Liverpool Football Club. To be engaging it would need to know a lot about the club (or if not, appear evasive). If the person asked it “Who do you think the greatest Liverpool manager was?” then it would need to know the names of some former Liverpool managers! But then you might want to talk about strikers or specific games or … A chatbot aiming to have convincing conversations about any topic the person comes up with needs facts about everything! That is what modern chatbots do have: provided by them sucking up and organising information from the web, for example. As a psychotherapist, DOCTOR never had to come up with answers, and echoing back the things the person said, or asking open-ended questions, was entirely natural in this context and even made it seem as though it cared about what the people were saying.

Because ELIZA did come across as being empathic in this way, the early people it was trialled on were very happy to talk to it in an uninhibited way. Weizenbaum’s secretary even asked him to leave while she chatted with it, as she was telling it things she would not have told him. That was despite the fact, or perhaps partly because, she knew she was talking to a machine. Others were convinced they were talking to a person just via a computer terminal. As a result it was suggested at the time that it might actually be used as a psychotherapist to help people with mental illness!

Weizenbaum was clear, though, that ELIZA was not an intelligent program, and it certainly didn’t care about anyone, even if it appeared to. It certainly would not have passed the Turing Test, proposed earlier by Alan Turing: that a computer could only be considered truly intelligent if people talking to it could not distinguish its answers from those of a person. Switch to any knowledge-based topic and the ELIZA DOCTOR script would flounder!

ELIZA was also the first in a less positive trend: making chatbots female because this is seen as something that makes men more comfortable. Weizenbaum chose a female character specifically because he thought it would be more believable as a supportive, emotional female. The Greek myth Pygmalion, from which the play’s name derives, is about a male sculptor falling in love with a female sculpture he had carved, that then comes to life. Again this fits a trend of automata and robots, in films and reality, being modelled after women simply to provide for the whims of men. Weizenbaum agreed he had made a mistake, saying that his decision to name ELIZA after a woman was wrong because it reinforces a stereotype of women. The fact that so many chatbots have since copied this mistake is unfortunate.

Because of his experiences with ELIZA he went on to become a critic of Artificial Intelligence (AI). Well before any program really could have been called intelligent (the time to do it!), he started to think about the ethics of AI use, as well as of the use of computers more generally (intelligent or not). He was particularly concerned about them taking over human tasks around decision making, and worried that human values would be lost if decision making was turned into computation: beliefs perhaps partly shaped by his experiences escaping Germany, where the act of genocide was turned into a brutally efficient bureaucratic machine, with human values completely lost. Ultimately, he argued that computers would be bad for society. They were created out of war and would be used by the military as a tool for war. In this, given, for example, the way many AI programs have been shown to have built-in biases, never mind the weaponisation of social media, spreading disinformation and intolerance in recent times, he was perhaps prescient.

by Paul Curzon, Queen Mary University of London

Fun to do

If you can program why not have a go at writing an ELIZA-like program yourself….or perhaps a program that runs a job interview for a particular job based on the person specification for it.



This page and talk are funded by EPSRC on research agreement EP/W033615/1.

Turn Right in Tenejapa

Designing software that is inclusive for global markets is easy. All you have to do is get an AI to translate everything in the interface into multiple languages… or perhaps, to do it properly, it is harder than that! Not everyone thinks like you do.

Coloured arrows turning and pointing in lots of different directions on a curved surface
Image by Gerd Altmann from Pixabay

Suppose you are the successful designer of a satellite navigation system. You’ve made lots of money selling it in the UK and the US and are now ready to take on the world. You want to be inclusive. It should be natural and easy to use by all. You therefore aim to produce versions for every known language. It should be easy, shouldn’t it? The basic system is fine. It can use satellite signals to work out where it is. You already have maps of everywhere based on Google Earth that you have been selling to the English speakers. It can work out routes and gives perfectly good directions just as the user needs them – like “Turn Left 200 meters ahead”. It is already based on Unicode, the international standard for storing characters, so can cope with characters from all languages. All you need to do now is get a team of translators to come up with the equivalent of the small number of phrases used by the device (which, of course, will also involve switching units from eg meters to yards and the like, but that is easy for a computer) and add a language selection mechanism. You have thought of everything. Simple…

Not so simple, actually. You may need more than just translators, and you may need more than just to change the words. As linguists have discovered, for example, a third of known languages have no concept of left and right. Since language helps determine the way we think, that also suggests the people who speak those languages don’t use the concepts. “Turn right” is meaningless. It has no equivalent.

So how do such people give directions or otherwise describe positions? Well, it turns out many use a method that for a long time some linguists suggested would never occur. Experiments have also shown that not only do they talk that way, but they may also think that way.

Take Tzeltal. It is spoken very widely in Mexico. A dialect of it spoken by about 15,000 people in the indigenous community of Tenejapa has been studied closely by Stephen Levinson and Penelope Brown. It is a large area roughly covering one slope of a mountainous region. The language has no general notion of left or right. Unlike in European languages, where we refer to directions based on the way we are facing (known as a relative frame of reference), directions in Tzeltal use what is known as an absolute frame of reference. It is as though its speakers have a compass in their heads and do the equivalent of referring to North, South, East and West all the time. Rather than “The cup is to the left of the teapot”, they might say the equivalent of “The cup is North of the teapot”. How did this system arise? Well, they don’t actually refer to North and South directly, but more like uphill and downhill, even when away from the mountainside: they subconsciously keep track of where uphill would be. So they are saying something more like “The cup is on the uphill side of the teapot”.

In Tenejapa they think differently about direction too

Experiments have suggested they think differently too. Show Europeans a series of objects ordered so they “point” to the left on a table, turn the people through 180 degrees and ask them to order the same objects on the table now in front of them, and they will generally again put them “pointing” to their left. In similar experiments, native Tzeltal speakers tended to put them “pointing” to their right (still pointing uphill, or whatever the absolute direction was). Similar things apply when they make gestures. It’s not just the words they use that are different, it is the way they internally represent the world that differs.

So back to the drawing board with the navigation system. If you really want it to be completely natural for all, then for each language you need more than just translators. You need linguists who understand the way people think and speak about directions in each language. Then you will have to do more than just change the words the system outputs: you will have to recode the navigation system to work the way they think. A natural system for the Tzeltal would need to keep track of the Tenejapan uphill and give directions relative to that.
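
As a very rough illustration (a toy Python sketch, not a real navigation algorithm, and a big simplification of how Tzeltal actually works), here is the difference between the two frames of reference: the same turn described relative to the driver’s heading, or relative to a fixed ‘uphill’ bearing the device keeps track of.

```python
# A toy sketch of relative vs absolute frames of reference for directions.
# Bearings are degrees clockwise from north. "Uphill" is just a fixed bearing
# we pretend the device tracks; real Tzeltal usage is far subtler than this.

UPHILL_BEARING = 135  # assume uphill lies to the south-east in this region

def relative_instruction(current_bearing, target_bearing):
    """English-style instruction: relative to the way you are facing."""
    diff = (target_bearing - current_bearing) % 360
    if diff < 45 or diff > 315:
        return "Continue straight ahead"
    return "Turn left" if diff > 180 else "Turn right"

def absolute_instruction(target_bearing):
    """Tzeltal-style instruction: relative to a fixed uphill/downhill axis."""
    diff = (target_bearing - UPHILL_BEARING) % 360
    if diff < 45 or diff > 315:
        return "Head uphill"
    if 135 < diff < 225:
        return "Head downhill"
    return "Head across the slope"

# Example: facing north (0 degrees), the route next turns east (90 degrees)
print(relative_instruction(0, 90))   # prints: Turn right
print(absolute_instruction(90))      # prints: Head across the slope
```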

It isn’t just directions, of course; there are many ways that our language and cultures lead to us thinking and acting differently. Design metaphors are also used a lot in interactive systems, but they only work if they fit their users’ culture. For example, things are often ordered left to right as that is the way we read… except who is ‘we’ there? Not everyone reads left to right!

Writing software for international markets isn’t as easy as it seems. You have to have good knowledge not just of local languages but also of differences in culture and deep differences in the way different people see the world… If you want to be an international success then you will be better at it if you work in a way that shows you understand and respect those from elsewhere.

by Paul Curzon, Queen Mary University of London, adapted from the archive




EPSRC supports this blog through research grant EP/W033615/1. 

Avoiding loneliness with StudyBuddy

A girl in a corner of a red space head on knees
Lonely Image by Foundry Co from Pixabay

University has always been a place where you make great friends for life. Social media means everyone can easily make as many online friends as they like, and ever more students go to university, meaning more potential friends to make. So surely things now are better than ever. And yet many students suffer from loneliness while at university. We somehow seem to have ever greater disconnection the more connections we make. Klara Brodahl realised there was an unmet need here that no one was addressing well and decided to try to solve it for the final year project of her computer science degree. Her solution was StudyBuddy, and with the support of an angel investor she has now set up a startup company and is rolling it out for real.

A loneliness epidemic

In the digital age, university students face an unexpected challenge: loneliness. Although they’re more “connected” than ever through social media and virtual interactions, the quality of these connections is often shallow. A 2023 study, for example, found that 92% of students in the UK feel lonely at some point during their university life. This “loneliness epidemic” has profound effects, contributing to issues like anxiety, depression and struggles with their studies.

During her own university years, Klara Brodahl had experienced first-hand the challenge of forming meaningful friendships in an environment where everyone seemed socially engaged online but wasn’t always connected in real life. She soon discovered that it wasn’t just her: it was a struggle shared by students across the country. Inspired by this, she set out to write a program that would fill the void in students’ lives and bridge the gap between studying and social life.

Combatting loneliness in the real world

She came up with StudyBuddy: a mobile app designed to combat student loneliness by supporting genuine, in-person connections between university students, not just virtual ones. Her aim was that it would help students meet, study, and connect in real time and in shared spaces. 

She realised that technology does have the potential to strengthen social bonds, but how it’s designed and used makes all the difference. The social neuroscientist John Cacioppo has pointed out that using social media primarily as a destination in its own right often leaves people feeling distant and dissatisfied. However, when technology is designed to serve as a bridge to offline human engagement, it can reduce loneliness and improve well-being. StudyBuddy embodies this approach by encouraging students to connect in person rather than trying to replace meeting face-to-face.

Study together in the real world

Part of making this work is in having reasons to meet for real. Klara realised that the need to study, and the fact that doing this in groups rather than alone can help everyone do better, could provide the excuse for this. StudyBuddy, therefore, integrates study goals with social interaction, allowing friendships to form around shared academic interests—an ideal icebreaker for those who feel nervous in traditional social settings.

The app uses location-based technology to connect students for co-study sessions, making in-person meetings easy and natural. Through a live map, students can see where others are checked in nearby at study spots like libraries, cafes, or student common areas. They can join existing study groups or start their own. The app uses university ID verification to help ensure connections are built on a trusted network.
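
As a minimal sketch of the core idea behind this kind of location-based matching (a toy Python example with invented campus spots and thresholds, not StudyBuddy’s actual code), here is how an app might list the study check-ins within walking distance of a student:

```python
from math import radians, sin, cos, asin, sqrt

# A toy sketch of location-based matching (invented data, not StudyBuddy's code):
# list study check-ins within walking distance of a student's current location.

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

check_ins = [  # (place, latitude, longitude) -- made-up campus study spots
    ("Library, 2nd floor", 51.5246, -0.0384),
    ("Ground Café",        51.5239, -0.0402),
    ("Maths common room",  51.5230, -0.0410),
]

def nearby(my_lat, my_lon, max_km=0.5):
    return [place for place, lat, lon in check_ins
            if distance_km(my_lat, my_lon, lat, lon) <= max_km]

print(nearby(51.5242, -0.0390))  # study sessions within 500 m
```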

From idea to startup company

Klara didn’t originally plan for StudyBuddy to become a real company. Like many graduates, she thought starting a business was something to perhaps try later, once she had some professional experience from a more ‘normal’ graduate job. However, when the graduate scheme she won a place on after graduating was unexpectedly delayed, she found herself with time on her hands. Rather than do nothing she decided to keep working on the app as a side project. It was at this point that StudyBuddy caught the attention of an angel investor, whose enthusiasm for the app gave Klara the confidence to keep going.

When her graduate scheme finally began, she was therefore already deeply invested in StudyBuddy. Trying to manage both roles, she quickly realised she preferred the challenge and creativity of her startup work over the graduate scheme. And when it became impossible to balance both, she took a leap of faith, quitting her graduate job to focus on StudyBuddy full-time – a decision that has since paid off. She gained early positive feedback, ran a pilot at Queen Mary University of London, and won early funding from investors willing to invest in what was essentially still an idea, rather than a product with a known market. As a result StudyBuddy has gradually turned into a useful mission-driven platform, providing students with a safe, real-world way to connect.

Making a difference

StudyBuddy has the potential to transform the university experience by reducing loneliness and fostering authentic, in-person friendships. By rethinking what engagement in the digital age means, the app also serves as a model for how technology can promote meaningful social interaction more generally. Klara has shown that with thoughtful design, technology can be a powerful tool for bridging digital and physical divides, creating a campus environment where students thrive both academically and socially. Her experience also shows how the secret to being a great entrepreneur is to be able to see a human need that no one else has seen or solved well. Then, if you can come up with a creative solution that really solves that need, your ideas can become reality and really make a difference to people’s lives.

– Klara Brodahl, StudyBuddy and Paul Curzon, Queen Mary University of London



This page and talk are funded by EPSRC on research agreement EP/W033615/1.

Accessible Technology in the Voting Booth


by Daniel Gill, Queen Mary University of London

Voting at an election: people depositing their voting slips
Image AI generated by Vilius Kukanauskas from Pixabay

On Thursday 4th July 2024, millions of adults around the UK went to their local polling station to vote for their representative in the House of Commons. However, for the 18% of adults who have a disability, this can be considerably more challenging. While the right of voters to vote independently and secretly is so important, many blind and partially sighted people cannot do so without assistance. Thankfully this is changing, and this election was hailed as the most accessible yet. So how does technology enable blind and partially sighted people to vote independently?

 There are two main challenges when it comes to voting for blind and partially sighted people. The names of candidates are listed down the left-hand side, so firstly, a voter needs to find the row of the person who they want to vote for. They then, secondly, need to put a cross in the box to the right. The image below gives an example of what the ballot paper looks like:

A mock up of a "CS4FN" voting slip with candidates
HOPPER, Grace
TURING, Alan Mathison
BENIOFF, Paul Anthony
LOVELACE, Ada

To solve the first problem, we can turn to audio. An audio device can be used to play a recording of the candidates as they appear on the ballot paper. Some charities also provide a phone number to call before the election, with a person who can read this list out. This is great, of course, but it does rely on the voter remembering the position of the person that they want to vote for. A blind or partially sighted voter is also allowed to use a text reader device, or perhaps a smart phone with a special app, to read out what is on the ballot paper in the booth.

Lots of blind and partially sighted people are able to read braille: a way of representing English words using bumps on the paper (read more about braille in this CS4FN article). One might think that this would solve all the problems but, in fact, there is a requirement that all the ballot papers for each constituency have a standard design to ensure they can be counted efficiently and without error.

The solution to the second problem is far more practical: the excitingly named tactile voting device. This is a simple plastic device which is placed on top of the ballot paper. Each of the boxes on the ballot paper (as shown to the right of the image above), has a flap above it with its position number embossed on it. When the voter finds the number of the person they want to vote for, they simply turn over the flap, and are guided by a perfectly aligned square guide to where the box is. The voter can then use that guide to draw the cross in the box.

This whole process is considerably more complicated than it is for those without disabilities – and you might be thinking, “there must be an easier way!” Introducing the McGonagle Reader (MGR)! This device combines both solutions into one device that can be used in the voting booth. Like the tactile voting device, it has flaps which cover each of the boxes for drawing the cross. But next to those are buttons which, when pressed, read out the information of the candidate for that row. This can save lots of time, removing the need to remember the position of each candidate – a voter can simply go down the page, find who they want to vote for and turn over the correct flap.

When people have the right to vote, it is especially important to ensure that they are able to exercise that right. This means that, whatever the cost or the logistics, everyone should have access to the tools they need to vote for their representative. Progress is now being made, but a lot more work still needs to be done.

To help ensure this happens in future, the RNIB want to hear about the experiences of those who voted or didn’t vote in the UK 2024 general election – see the survey linked from the RNIB page here.




This blog is funded through EPSRC grant EP/W033615/1.

AMPER: AI helping future you remember past you

by Jo Brodie, Queen Mary University of London

Have you ever heard a grown-up say “I’d completely forgotten about that!” and then share a story from some long-forgotten memory? While most of us can remember all sorts of things from our own life history, it sometimes takes a particular cue for us to suddenly recall something we’d not thought about for years or even decades.

As we go through life we add more and more memories to our own personal library, but those memories aren’t neatly organised like books on a shelf. For example, can you remember what you were doing on Thursday 20th September 2018 (or can you think of a way that would help you find out)? You’re more likely to be able to remember what you were doing on the last Tuesday in December 2018 (but only because it was Christmas Day!). You might not spontaneously recall a particular toy from your childhood but if someone were to put it in your hands the memories about how you played with it might come flooding back.

Accessing old memories

In Alzheimer’s disease (a type of dementia) people find it harder to form new memories or retain recent information, which can make daily life difficult and bewildering, and they may lose their self-confidence. Their older memories, the ones that were made when they were younger, are often less affected, however. Those memories are still there but might need drawing out with a prompt to help bring them to the surface.

Perhaps a newspaper advert will jog your memory in years to come… Image by G.C. from Pixabay

An EPSRC-funded project at Heriot-Watt University in Scotland is developing a tablet-based ‘story facilitator’ agent (a software program designed to adapt its responses to human interaction) which uses artificial intelligence to help people with Alzheimer’s disease and their carers. The device, called ‘AMPER’*, could improve wellbeing and a sense of self in people with dementia by helping them to uncover their ‘autobiographical memories’ about their own life and experiences – and also help their carers remember them as they were ‘before the disease’.

Our ‘reminiscence bump’

We form some of our most important memories between our teenage years and early adulthood – we start to develop our own tastes in music and the subjects we like studying, we might experience first loves, perhaps go to university, start a career and maybe a family. We also all live through a particular period of time, experiencing the same world events as others of the same age, and those experiences are fitted into our ‘memory banks’ too. If someone was born in the 1950s then their ‘reminiscence bump’ will be events from the 1970s and 1980s – those memories are usually more available, so people affected by Alzheimer’s disease can still access them until the more advanced stages of the disease. Big, important things that, when we’re older, we’ll remember more easily if prompted.
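
As a rough back-of-the-envelope check, here is a little Python sketch that works out a reminiscence bump window, assuming (as the classroom activity below does) that the bump covers roughly 15 to 30 years after someone’s birth year.

    # Rough sketch: the 'reminiscence bump' window for a given birth year,
    # assuming it covers roughly 15 to 30 years after birth.
    def reminiscence_bump(birth_year):
        return birth_year + 15, birth_year + 30

    for year in (1942, 1959, 1973, 1997):
        start, end = reminiscence_bump(year)
        print(f"Born {year}: strongest memories likely from about {start} to {end}")

So someone born in 1955 would have a bump covering roughly 1970 to 1985, matching the 1970s-and-1980s example above.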

In years to come you might remember fun nights out with friends.
Image by ericbarns from Pixabay

Talking and reminiscing about past life events can help people with dementia by reinforcing their self-identity, and increasing their ability to communicate – at a time when they might otherwise feel rather lost and distressed. 

“AMPER will explore the potential for AI to help access an individual’s personal memories residing in the still viable regions of the brain by creating natural, relatable stories. These will be tailored to their unique life experiences, age, social context and changing needs to encourage reminiscing.”

Dr Mei Yii Lim, who came up with the idea for AMPER(3).

Saving your preferences

AMPER comes pre-loaded with publicly available information (such as photographs, news clippings or videos) about world events that would be familiar to an older person. It is also given information about the person’s likes and interests. It offers examples of these as suggested discussion prompts, and the person with Alzheimer’s disease can decide with their carer what they might want to explore and talk about. Here comes the clever bit – AMPER also contains an AI feature that lets it adapt to the person with dementia. If the person selects certain things to talk about instead of others, then in future the AI can suggest more things related to those preferences. Each choice the person with dementia makes now reinforces what the AI will show them in future. That might include a preference for watching a video or looking at photos over reading something, and the AI can adjust to shorter attention spans if necessary.
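
The project’s actual algorithms aren’t described here, but a minimal Python sketch of this kind of preference weighting might look like the following – the categories, weights and update rule are invented purely for illustration, not AMPER’s real code.

    import random

    # Toy sketch of preference-weighted prompt suggestion (invented example).
    # Each time the person chooses a prompt from one category, that category's
    # weight goes up, so similar prompts become more likely next time.
    weights = {"music": 1.0, "sport": 1.0, "local news": 1.0, "family photos": 1.0}

    def suggest():
        # Pick a category at random, biased towards higher-weighted ones.
        categories = list(weights)
        return random.choices(categories, weights=[weights[c] for c in categories])[0]

    def record_choice(chosen):
        # The person picked this category, so nudge its weight up.
        weights[chosen] += 0.5

    # If the person keeps choosing music prompts...
    for _ in range(5):
        record_choice("music")
    print(suggest())  # ...music-related prompts now come up far more often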

Reminiscence therapy is a way of coordinated storytelling with people who have dementia, in which you exercise their early memories which tend to be retained much longer than more recent ones, and produce an interesting interactive experience for them, often using supporting materials — so you might use photographs for instance

Prof Ruth Aylett, the AMPER project’s lead at Heriot-Watt University(4).

When we look at a photograph, for example, the memories it brings up haven’t been organised neatly in our brain like a database. Our memories form connections with all our other memories, more like the branches of a tree. We might remember the people we’re with in the photo, then remember other fun times we shared with them, perhaps places we visited together and the sights and smells we experienced there. AMPER’s AI can mimic the way our memories branch, showing new prompts based on the person’s previous interactions.
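
Again as a toy illustration only (the prompts and links below are invented, not AMPER’s real data), connected memory prompts could be stored as a simple graph, so that choosing one suggests its ‘neighbours’ next:

    # Toy sketch: branching memory prompts stored as a graph (invented data).
    related_prompts = {
        "seaside holiday photo": ["ice cream vans", "old beach huts"],
        "ice cream vans": ["summer songs on the radio"],
        "old beach huts": ["family day trips"],
    }

    def next_prompts(chosen):
        # Suggest prompts connected to the one the person just explored.
        return related_prompts.get(chosen, [])

    print(next_prompts("seaside holiday photo"))
    # ['ice cream vans', 'old beach huts']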

Although AMPER can help someone with dementia rediscover themselves and their memories, it can also help carers in care homes (who didn’t know them when they were younger) learn more about the person they’re caring for.

*AMPER stands for ‘Agent-based Memory Prosthesis to Encourage Reminiscing’.


Suggested classroom activities – find some prompts!

  • What’s the first big news story you and your class remember hearing about? Do you think you will remember that in 60 years’ time?
  • What sort of information about world or local events might you gather to help prompt the memories of someone born in 1942, 1959, 1973 or 1997? (Remember that their reminiscence bump will peak in the 15 to 30 years after they were born – some of them may still be in the process of making those memories for the first time!)

See also

If you live near Blackheath in South East London why not visit Age Exchange, a reminiscence centre and arts charity providing creative group activities for those living with dementia and their carers. It has a very nice cafe.

Related careers

The AMPER project is interdisciplinary, mixing robots and technology with psychology, healthcare and medical regulation.

We have information about four similar-ish job roles on our TechDevJobs blog that might be of interest. These were job adverts for roles in the Netherlands related to the ‘Dramaturgy^ for Devices’ project, which links technology with the performing arts to adapt robots’ behaviour and improve their social interaction and communication skills.

Below is a list of four job adverts (which have now closed!) including information about the job description, the types of people the employers were looking for and the way in which they wanted them to apply. You can find our full list of jobs that involve computer science directly or indirectly here.

^Dramaturgy refers to the study of the theatre, plays and other artistic performances.

Dramaturgy for Devices – job descriptions

References

1. Agent-based Memory Prosthesis to Encourage Reminiscing (AMPER) Gateway to Research
2. The Digital Human: Reminiscence (13 November 2023) BBC Sounds – a radio programme that talks about the AMPER Project.
3. Storytelling AI set to improve wellbeing of people with dementia (14 March 2022) Heriot-Watt University news
4. AMPER project to improve life for people with dementia (14 January 2022) The Engineer


EPSRC supports this blog through research grant EP/W033615/1.