AI owes us an explanation

Image by Chen from Pixabay

Why should AI tools explain why? Erhan Pisirir and Evangelia Kyrimi, researchers at Queen Mary University of London, explain why.

From the moment we start talking, we ask why. A three-year-old may ask fifty “whys” a day. ‘Why should I hold your hand when we cross the road?’ ‘Why do I need to wear my jacket?’ Every time their parent provides a reason, the toddler learns and makes sense of the world a little bit more.

Even when we are no longer toddlers trying to figure out why the spoon falls on the ground and why we should not touch the fire, it is still in our nature to question the reasons. The decisions and the recommendations given to us have millions of “whys” behind them. A bank might reject our loan application. A doctor might urge us to go to hospital for more tests. And every time, our instinct is to ask the same question: Why? We trust advice more when we understand it.

Nowadays the advice and recommendations come not only from other humans but also from computers with artificial intelligence (AI), such as a bank’s computer systems or health apps.  Now that AI systems are giving us advice and making decisions that affect our lives, shouldn’t they also explain themselves?

That’s the promise of Explainable AI: building machines that can explain their decisions or recommendations. These machines must be able to say not only what they decided, but also why, in a way we can understand.

From trees to neurons

For decades we have been trying to make machines think for us. A machine does not have the thinking, or reasoning, abilities of humans, so we need to give it instructions on how to think. When computers were less capable, these instructions were simpler. For example, they could look like a tree: think of a tree where each branch is a question with several possible answers, and each answer creates a new branch. Do you have a rash? Yes. Do you have a temperature? Yes. Do you have nausea? Yes. Are the spots purple? Yes. If you push a glass against them do they fade away? No … Go to the hospital immediately.
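
To get a feel for how those branching instructions work, here is a minimal sketch in Python of the toy rash tree above. The questions, the advice and the little helper function are all made up for this illustration (it is certainly not a real medical tool). Notice how the answers collected along the path also build up the “why”:

```python
# A toy decision tree for the rash example above.
# NOT medical advice: it only illustrates how branching instructions work
# and how the path taken doubles as an explanation.
# (Every "wrong" answer simply ends this toy example.)

def ask(question):
    """Ask a yes/no question and return True for a 'yes' answer."""
    return input(question + " (yes/no) ").strip().lower().startswith("y")

def rash_tree():
    reasons = []  # every answer along the path becomes part of the "why"
    questions = [
        ("Do you have a rash?", True),
        ("Do you have a temperature?", True),
        ("Do you have nausea?", True),
        ("Are the spots purple?", True),
        ("If you push a glass against them, do they fade away?", False),
    ]
    for question, answer_needed in questions:
        answer = ask(question)
        reasons.append(question + (" Yes." if answer else " No."))
        if answer != answer_needed:
            return "This toy tree stops here: see a doctor if you are worried.", reasons
    return "Go to the hospital immediately.", reasons

advice, reasons = rash_tree()
print(advice)
print("Why? Because: " + " ".join(reasons))
```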

A tree of decisions naturally gives whys, attached to the path taken to reach its tip: you should go to the hospital because your collection of symptoms (a rash of purple spots, a temperature and nausea, and especially the fact that the spots do not fade under a glass) means it is likely you have meningitis. Because meningitis is life-threatening and can get worse very quickly, you need to get to a hospital urgently. An expert doctor can check reasoning like this and decide whether the explanation is actually good reasoning about whether someone has meningitis or not, or more to the point, whether they should rush to the hospital.

Over time, humans have made computers capable of far more complex tasks. With this, their thinking instructions became more complex too. Nowadays they might look like complicated networks instead of trees with branches: a network of neurons in a human brain, for example. These complex systems make computers great at answering more difficult questions successfully. But unlike with a tree of decisions, humans can no longer understand how the computer reaches its final answer just by glancing at its system of thinking. It is no longer the case that following a simple path of branches through a decision tree gives a definite answer, never mind a why. Now there are loops and backtracks, splits and joins, and the decisions depend on weightings of answers, not just a definite Yes or No.

Take meningitis, for example. According to the NHS website, there are many more symptoms than the ones above, and they can appear in any order or not at all. There may not even be a rash, or the rash may fade when pressure is applied. It is complicated, and certainly not as simple as our decision tree suggests (the NHS says “Trust your instincts and do not wait for all the symptoms to appear or until a rash develops. You should get medical help immediately if you’re concerned about yourself or your child.”). The situation is certainly NOT simple enough to say from a decision tree, for example, “Do not worry, you do not have meningitis because your spots are not purple and did fade in the glass test”. An explanation like that could kill someone. The decision has to be made from a complex web of inter-related facts. AI tools require you to just trust their instincts!
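
To get a tiny taste of why the “why” becomes harder to read off, here is a deliberately over-simplified, made-up sketch of a decision based on weighted scores rather than a path of Yes/No branches. The symptoms, weights and threshold are invented purely for illustration and bear no relation to any real medical model, and real AI systems combine thousands or millions of such weights in layer after layer:

```python
# A made-up scoring "network": each symptom nudges a score up or down
# by some weight the system has learned. All numbers here are invented.

weights = {
    "rash": 0.9,
    "temperature": 0.7,
    "nausea": 0.4,
    "stiff neck": 1.1,
    "dislike of bright lights": 0.8,
}

# 1 means the symptom is present, 0 means it is absent.
symptoms = {"rash": 0, "temperature": 1, "nausea": 1,
            "stiff neck": 1, "dislike of bright lights": 0}

score = sum(weights[name] * present for name, present in symptoms.items())
print("Seek urgent medical help" if score > 1.5 else "Keep a close eye on the symptoms")

# The advice pops out of a number, not a path of questions,
# so there is no obvious "why" to read off at a glance.
```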

Let us, for a moment, forget about branches and networks, and imagine that AI is a magician’s hat: something goes in (a white handkerchief) and, at the tap of a wand, something else magically pops out (a white rabbit). With a loan application, for example, details such as your age, income, or occupation go in, and a decision comes out: approved or rejected.

Inside the magician’s hat

Nowadays researchers are trying to make the magician’s hat transparent so that you can have a sneak peek at what is going on in there (it shouldn’t seem like magic!). Was the rabbit in a secret compartment, did the magician take it from their pocket and slip it in at the last minute, or did it really appear out of nowhere (real magic)? Was the decision based on your age or income, or was it influenced by something that should be irrelevant, like the font choice in your application?

Currently, explainable AI methods can answer different kinds of questions (though not always effectively):

  • Why: Your loan was approved because you have a regular income record and have always paid back loans in the past.
  • Why not: Your loan application was rejected because you are 20 years old and are still a student.
  • What if: If you earned £1,000 or more each month, your loan application would not have been rejected (see the sketch after this list).
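
Here is a minimal, hypothetical sketch of how a “what if” answer like the one above could be produced for a toy loan rule. The rule, the £1,000 threshold and the applicant’s details are all invented for illustration; real lending systems and real explanation tools are far more sophisticated:

```python
# A toy loan rule and a brute-force "what if" check.
# The rule and all the numbers are invented purely for illustration.

def loan_decision(monthly_income, always_repaid):
    """Approve if income is at least £1,000 a month and past loans were repaid."""
    return monthly_income >= 1000 and always_repaid

applicant = {"monthly_income": 650, "always_repaid": True}

decision = loan_decision(**applicant)
print("Approved" if decision else "Rejected")

if not decision:
    # "What if": try slightly higher incomes until the decision flips.
    for income in range(applicant["monthly_income"], 2001, 50):
        if loan_decision(income, applicant["always_repaid"]):
            print(f"What if: with a monthly income of £{income}, "
                  f"the application would have been approved.")
            break
```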

Researchers are inventing many different ways to give these explanations: for example, heat maps that highlight the most important pixels in an image, lists of pros and cons that show the factors for and against a decision, visual explanations such as diagrams or highlights, or natural-language explanations that sound more like everyday conversations.

What explanations are good for

The more interactions people have with AI, the more we see why AI explanations are important. 

  • Understanding why AI made a specific recommendation helps people TRUST the system more; for example, doctors (or patients) might want to know why AI flagged a tumour before acting on its advice. 
  • The explanations might expose whether AI recommendations involve discrimination and bias, increasing FAIRNESS. Think about the loan rejection scenario again: what if the explanation shows that the reason for the AI’s decision was your race? Is that fair?
  • The explanations can help researchers and engineers with DEBUGGING, letting them understand and fix problems with AI faster.
  • AI explanations are also becoming more and more required by LAW. The General Data Protection Regulation (GDPR) gives people a “right to explanation” for some automated decisions, especially in high-stakes areas such as healthcare and finance.

The convincing barrister

One thing to keep in mind is that the presence of explanations does not automatically make an AI system perfect. Explanations themselves can be flawed. The biggest catch is when an explanation is convincing when it shouldn’t be. Imagine a barrister with charming social skills who can spin a story and get a clearly guilty client off. AI explanations should not aim to be blindly convincing whether the AI is right or wrong. In the cases where the AI gets it wrong (and from time to time it will), the explanation should make this clear rather than falsely reassure the human.

The future 

Explainable AI isn’t an entirely new concept. Decades ago, early expert systems in medicine already included “why” buttons to justify their advice. But only in recent years has explainable AI become a major trend, as AI systems have become more powerful and concerns have grown about AI surpassing human decision-making while potentially making some bad decisions.

Researchers are now exploring ways to make explanations more interactive and human-friendly, similar to how we can ask ChatGPT questions like ‘what influenced this decision the most?’ or ‘what would need to change for a different outcome?’ They are trying to tailor an explanation’s content, style and representation to the user’s needs.

So next time AI makes a decision for you, ask yourself: could it tell me why? If not, maybe it still has some explaining to do.

Erhan Pisirir and Evangelia Kyrimi, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.

