If you go down to the woods today…

A girl walking through a meadow full of flowers within woods
Image by Jill Wellington from Pixabay

At the 2025 RHS Chelsea Flower Show there was one garden that was about technology as well as plants: the Avanade Intelligent Garden, exploring how AI might be used to support plants. Each of the trees contained probes that sensed and recorded data about it, which could then be monitored through an app. This takes pioneering research from over two decades ago a step further, incorporating AI into the picture and making it mainstream. Back then a team led by Yvonne Rogers built an ambient wood, aiming to add excitement to a walk in the woods...

Mark Weiser had a dream of ‘Calm Computing’ and, while computing sometimes seems ever more frustrating to use, his ideas led to lots of exciting research that saw at least some computers disappearing into the background. His vision was driven by a desire to remove the frustration of using computers, but also by the realization that the most profound technologies are the ones you just don’t notice. He wanted technology to actively remove frustrations from everyday life, not just the ones caused by computers. He wrote of wanting to “make using a computer as refreshing as taking a walk in the woods.”

Not calm, but engaging and exciting!

No one argues that computers should be frustrating to use, but Yvonne Rogers, then of the Open University, had a different idea of what the new vision could be. Not calm. Anything but calm, in fact (though not frustrating either, of course). Not calm, but engaging and exciting!

Her vision of Weiser’s tranquil woods was not relaxing but provocative and playful. To prove the point her team turned some real woods in Sussex into an ‘Ambient Wood’: an enhanced wood. When you entered it you took probes with you that you could point and poke with, letting you take readings of different kinds in easy ways. Time-hopping ‘Periscopes’ placed around the woods allowed you to see those patches of woodland at other times of the year. There was also a special woodland den where you could see the bigger picture of the woods as all your readings were pulled together using computer visualisations.

Not only was the Ambient Wood technology visible and in your face but it made the invisible side of the wood visible in a way that provoked questions about the wildlife. You noticed more. You saw more. You thought more. A walk in the woods was no longer a passive experience but an active, playful one. Woods became the exciting places of childhood stories again but now with even more things to explore.

The idea behind the Ambient Wood, and similar projects like Bristol’s Savannah, where playing fields are turned into an African savannah, was to revisit the original idea of computers but in a new context. Computers started as tools, and tools don’t disappear: they extend our abilities. Tools originally extended our physical abilities – a hammer allows us to hit things harder, a pulley to lift heavier things. They make us more effective and allow us to do things a mere human couldn’t do alone. Computer technology can do a similar thing for the human intellect… if we design it well.

“The most important thing the participants gained was a sense of wonderment at finding out all sorts of things and making connections through discovering aspects of the physical woodland (e.g., squirrel’s droppings, blackberries, thistles)”

– Yvonne Rogers

The Weiser dream was that technology invisibly watches the world and removes the obstacles in the way before you even notice them. It’s a little like the way servants to the aristocracy were expected to always have everything just right while never being noticed by those they served. The way this is achieved is to have technology constantly monitoring, understanding what is going on and how it might affect us, and then calmly fixing things. The problem at the time was that this needs really ‘smart’ technology – a high level of Artificial Intelligence – and that proved more difficult than anyone imagined (though perhaps we are now much closer than we were). Our behaviour and desires, it turns out, are full of subtlety and much harder to read than was imagined. Even a super-intellect would probably keep getting it wrong.

There are also ethical problems. If we ever do achieve the dream of total calm we might not like it. It is very easy to be gung-ho with technology and not realize the consequences. Calm computing needs monitors – the computer measuring everything it can so that it has as much information as possible to make decisions from (see Big Sister is Watching You).

A classic example of how this can lead to people rejecting technology intended to help comes from a project to make a ‘smart’ residential home for the elderly. The idea was that by wiring up the house to track and monitor the residents, the nurses would be able to provide much better care, and relatives would be able to see how things were going. The place was filled with monitors. For example, sensors in the beds measured residents’ weight while they slept. Each night the occupant’s weight could invisibly be taken and the nurses alerted to worrying weight loss over time. The smart beds could also detect tossing and turning, so someone having bad nights could be helped. A smart house could use similar technology to help you or me have a good night’s sleep, or help us diet.
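As a rough sketch of the kind of alerting involved – not the actual care-home system, and with a made-up rule of “more than a 5% drop over 30 days” purely for illustration – the logic might boil down to something like this:

# Toy sketch only: nightly bed-sensor weights in, an alert decision out.
# The 5% / 30-day threshold is invented for illustration, not taken from the project.
from datetime import date

def weight_loss_alert(readings, percent=5.0, days=30):
    # readings: list of (date, weight_kg) pairs, one per night
    readings = sorted(readings)
    if len(readings) < 2:
        return False
    latest_date, latest_weight = readings[-1]
    # earliest weight recorded within the last `days` days
    window = [w for d, w in readings if (latest_date - d).days <= days]
    baseline = window[0]
    loss_percent = (baseline - latest_weight) / baseline * 100
    return loss_percent > percent

# Example: a month of nightly readings drifting gently downwards
readings = [(date(2024, 1, d), 70 - 0.15 * d) for d in range(1, 31)]
if weight_loss_alert(readings):
    print("Alert the nurses: possible significant weight loss")

The calculation is the easy part; the crucial design question, as the rest of this story shows, is who the alert gets reported to.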

The problem was that the beds could tell other things too: things that the occupants preferred to keep to themselves. Nocturnal visitors also showed up in the records. That’s the problem if technology looks after us every second of the day: the records may give away to others far more than we are happy with.

Yvonne’s vision was different. It was not that the computers try to second-guess everything, but instead that they extend our abilities. It is quite easy for new technology to leave us poorer intellectually than we were. Calculators are a good example. Yes, we can do more complex sums quickly now, but at the same time, without a calculator, many people can’t do the sums at all. Our abilities have been both improved and damaged at the same time. Generative AI seems to be heading the same way. What the probes do, instead, is extend our abilities, not reduce them: allowing us to see the woods in a new way, but to use the information however we wish. The probes encourage imagination.

The alternative to the smart house (or calculator) that pampers you, allowing your brain to stay in neutral, or the residential home that monitors you for the sake of the nurses and your relatives, is one where the sensors are working for you. Where you are the one the bed reports to, helping you to then make decisions about your health, or where the monitors you wear are (only) part of a game that you play because it’s fun.

What next? Yvonne suggested the same ideas could be used to help learning and exploration in other ways, such as understanding our bodies: “I’d like to see kids discover new ways of probing their bodies to find out what makes them tick.”

So if Yvonne’s vision is ultimately the way things turn out, you won’t be heading for a soporific future while the computer deals with real life for you. Instead it will be a future where the computers are sparking your imagination, challenging you to think, filling you with delight…and where the woods come alive again just as they do in the storybooks (and in the intelligent garden).

Paul Curzon, Queen Mary University of London

(adapted from the archive)


Designing an interactive prayer mat

Successful interactive systems design is often based on spotting a need that really good solutions do not yet exist for, then coming up with a realistic solution others haven’t thought of. The real key is then having the technical and design skill to actually build it, as well as the perseverance to go through lots of rounds of prototyping to get it right. Even then it is still a long haul, needing different people and business skills, to end up with a successful product. Kamal Ali showed how it’s done with the development of My Salah Mat, an interactive prayer mat to help young children learn to pray.

A child in prayer bowing low to God in the direction of Mecca
Image by Samer Chidiac from Pixabay

He realised there was a need when watching his 4-year-old struggle to get his feet and hands, forehead and nose in the right place to pray: correctly bowing low to God in the direction of Mecca. Instead, he kept lying on his tummy. Kamal’s first thought was to try to buy something that would help.

He searched for something suitable – perhaps a mat with the positions marked on it in some child-friendly way – and was surprised when he could find nothing. Thinking it was a good idea anyway, and with a background in product design, he set about creating a Photoshop prototype himself. One of the advantages of prototyping is that it encourages “design-by-doing”, and just in doing it he had new ideas: children need help with the words of prayers too, so why not put them on the mat in child-friendly ways? From there, realising it could be interactive, with buttons to press so it could read out instructions, was the next step. After all, young children may struggle with reading: it is important to really know your users and what will and will not work for them!

As he was already running a company, he knew how to get a physical prototype made, so after working on the idea with a friend he created the first one. From there, there were lots more rounds of prototyping, for example to get the look and feel right for young kids, and to ensure it would fill their need really, really well.

He also focussed on one clear group – young children – and designed for their need. Once that design was successful the company then developed a very different design, based on the same idea, for adults and reverts. That is an important interaction design lesson. Different groups of potential users may need different designs, and trying to design one product for everyone may not end up working for anyone. Find a specific group and design really well for them!

In the process of creating the design Kamal started to wonder why he was doing it. He realised it was not to make money – he was really thinking of it as a social venture. It was not about profit but all about doing social good. As he has said:

“I finally realised that my motivation was to create a high quality product that could help children learn how to pray Salah. Most importantly, children would want to pray and interact with the different aspects of Salah. This was my true motivation and the most important thing to me.”

Great interactive system product design takes inspiration, skill and a lot of perseverance, but the real key is to be able to identify a real unfulfilled need, and come up with realistic solutions that both fill the need and that people want. That is not just about having an idea; it is about doing rounds and rounds of prototyping, and trial and error with the people who will be the users, to get the design right. If you do get it right, you can do all sorts of good.

by Paul Curzon, Queen Mary University of London.


Oh no! Not again…

What a mess. There’s flour all over the kitchen floor. A fortnight ago I opened the cupboard to get sugar for my hot chocolate. As I pulled out the sugar, it knocked against the bag of flour which was too close to the edge… Luckily the bag didn’t burst and I cleared it up quickly before anyone found out. Now it’s two weeks later and exactly the same thing just happened to my brother. This time the bag did burst and it went everywhere. Now he’s in big trouble for being so clumsy!

Flour cascading everywhere
Image (cropped) by Anastasiya Komarova from Pixabay

In safety-critical industries, like healthcare and aviation especially, it is really important that there is a culture of reporting incidents, including near misses. It also turns out to be important that the mechanisms for reporting issues are appropriately designed, and that there is a no-blame culture, so that people are encouraged to report incidents and to do so accurately and without ambiguity.

Was the flour incident my brother’s fault? Should he have been more careful? He didn’t choose to put the sugar in a high cupboard with the flour. Maybe it was my fault? I didn’t choose to put the sugar there either. But I didn’t tell anyone about the first time it happened. I didn’t move the sugar to a lower cupboard so it was easier to reach either. So maybe it was my fault after all? I knew it was a problem, and I didn’t do anything about it. Perhaps thinking about blame is the wrong thing to do!

Now think about your local hospital.

James is a nurse, working in intensive care. Penny is really ill and is being given insulin by a machine that pumps it directly into her vein. The insulin is causing a side effect though – a drop in blood potassium level – and that is life threatening. They don’t have time to set up a second pump, so the doctor decides to stop the insulin for a while and to give a dose of potassium through a second tube controlled by the same pump. James sets up the bag of potassium and carefully programs the pump to deliver it, then turns his attention to his next task. A few minutes later, he glances at the pump again and realises that he forgot to release the clamp on the tube from the bag of potassium. Penny is still receiving insulin, not the potassium she urgently needs. He quickly releases the clamp, and the potassium starts to flow. An hour later, Penny’s blood potassium levels are pretty much back to normal: she’s still ill, but out of danger. Phew! Good job he noticed in time and no-one else knows about the mistake!

Two weeks later, James’ colleague, Julia, is on duty. She makes a similar mistake treating a different patient, Peter. Except that she doesn’t notice her mistake until the bag of insulin has emptied. Because it took so long to spot, Peter needs emergency treatment. It’s touch-and-go for a while, but luckily he recovers.

Julia reports the incident through the hospital’s incident reporting system, so at least it can be prevented from happening again. She is wracked with guilt for making the mistake, but also hopes fervently that she won’t be blamed, and so punished, for what happened.

Don’t miss the near misses

Why did it happen? There are a whole bunch of problems that are nothing to do with Julia or James. Why wasn’t it standard practice to always have a second pump set up for critically ill patients in case such emergency treatment is needed? Why can’t the pump detect which bag the fluid is being pumped from? Why isn’t it really obvious whether the clamp is open or closed? Why can’t the pump detect that? If the first incident – a ‘near miss’ – had been reported, perhaps some of these problems might have been spotted and fixed. How many other times has it happened but not been reported?

What can we learn from this? One thing is that there are lots of ways of setting up and using systems, and some may well make them safer. Another is that reporting “near misses” is really important. They are a valuable source of learning that can alert other people to mistakes they might make and lead to a search for ways of making the system safer, perhaps by redesigning the equipment or changing the way it is used, for example – but only if people tell others about the incidents. Reporting near-misses can help prevent the same thing happening again.

The above was just a story, but it’s based on an account of a real incident… one that has been reported so it might just save lives in the future.

Report it!

The mechanisms used to report incidents, as well as the culture around reporting them, can make a big difference to whether incidents are reported at all. However, even when incidents are reported, the reporting systems and culture can help or hinder the learning that results.

Chrystie Myketiak at Queen Mary analysed actual incident reports for the kind of language used by those writing them. She found that the people doing the reporting used different strategies in the way they wrote the reports, depending on the kind of incident it was. In situations where there was no obvious implication that a person had made a mistake (such as where sterilization equipment had not worked successfully) they used one kind of language. Where those involved were likely to be seen as responsible, and so blamed (when a wrong number had been entered into a medical device, for example), they used a different kind of language.

In the latter, where “user errors” might have been involved, those doing the reporting were more likely to write in a way that hid the identity of any person involved, for example saying “The pump was programmed” or writing about “he” or “she” rather than a named person. They were also more likely to write in a way that added ambiguity. For example, in the user error reports it was less clear whether the person making the report was the one involved or whether someone else was writing it, such as a witness or someone not involved at all.

Writing in these kinds of ways, and the fact that the style differed from reports where no one was likely to be blamed, suggests that those completing the reports were aware that their words might be misinterpreted by those who read them. The fact that people might be blamed hung over the reporting.

The result of adding what Chrystie called “precise ambiguity” might be that important information was inadvertently concealed, making it harder to understand why the incident happened and so work out how best to avoid it. As a result, patient safety might not be improved even though the incident was reported. This shows one of the reasons why a strong culture of no-fault reporting is needed if a system is to be made as safe as possible. In the airline industry, which is incredibly safe, there is a clear system of no-fault reporting, with pilots, for example, being praised for reporting near misses of plane crashes rather than punished for any mistake that led to the near miss.

This work was part of the EPSRC-funded CHI+MED research project led by Ann Blandford at UCL looking at design for patient safety. In separate work on the project, Alexis Lewis, at Swansea University, explored how best to design the actual incident reporting forms as part of her PhD. A variety of forms are used in hospitals across the UK and she examined more than 20 different ones. Many had features that would make it harder than necessary for nurses and doctors to report incidents accurately, even if they wanted to do so openly so that hospital staff would learn as much as possible from the incidents that did happen. Some forms failed to ask about important facts and many didn’t encourage feedback. It wasn’t clear how much detail, or even what, should be reported. She used the results to design a new reporting form that avoided the problems and that could be built into a system that encourages the reporting of incidents. Ultimately her work led to changes to the reporting form and process used within at least one health board she was working with.

People make mistakes, but safety does not come from blaming those that make them. That just discourages a learning culture. To really improve safety you need to praise those that report near misses, as well as ensuring the forms and mechanisms they must use to do so help them provide the information needed.

Updated from the archive, written by the CHI+MED team.


My first signs

Alexander Graham Bell was inspired by the deafness of his mother to develop new technologies to help. Lila Harrar, then a computer science student at Queen Mary, University of London, was also inspired by a deaf person to do something to make a difference. Her chance came when she had to think of something to do for her undergraduate project.

Sign language relief sculpture on a stone wall: “Life is beautiful, be happy and love each other”, by Czech sculptor Zuzana Čížková on Holečkova Street in Prague-Smíchov, by a school for the deaf
Image (cropped) by ŠJů, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons

Her inspiration came from working with a deaf colleague in a part-time job on the shop floor at Harrods. The colleague often struggled to communicate with customers, so Lila decided to do something to encourage hearing as well as deaf people to learn Sign Language. She developed an interactive tutor program that teaches both deaf and non-deaf users Sign Language. Her software included games and quizzes along with the learning sections – and it caught the attention of the company Microbooks, who were so impressed that they decided to commercialise it. As Lila discovered, you need both creativity and logical thinking skills to do well at Computer Science – with both, together with a bit of business savvy, perhaps you could become the country’s next great innovator.

– Peter W. McOwan and Paul Curzon, Queen Mary University of London


Testing AIs in Minecraft

by Paul Curzon, Queen Mary University of London

A complex Minecraft world with a lake, grasslands, mountains and a building
Image by allinonemovie from Pixabay

What makes a good environment for child AI learning development? Possibly the same as for human child learning development: Minecraft.

Lego is one of the best games to play for children’s learning development. The word Lego is based on the Danish words for “play well”. In the virtual world, Minecraft has of course taken up the mantle. A large part of why they are wonderful games is that they are open-ended and flexible. There are infinite possibilities for what you can build and do. Rather than encouraging a narrow focus on one limited thing to learn, as many other games do, they support open-ended creativity and so educational development. Given how positive it can be for children, it shouldn’t be surprising that Minecraft is now being used to help AIs develop too.

Games have long been used to train and test Artificial Intelligence programs. Early programs were developed to play, and ultimately beat, humans at specific games like Checkers, Chess and, later, Go. That mastered, AIs started to learn to play individual arcade games as a way to extend their abilities. A key part of our intelligence is flexibility, though: we can learn new games. Aiming to copy this, the AIs were trained to follow suit and so became more flexible, showing they could learn to play multiple arcade games well.

This still misses a vital part of our flexibility, though. The thing about all these games is that the whole game experience is designed around the task the player has to complete. Everything is there for a reason. It is all an integral part of the game. There are no pieces at all in a chess game that are just there to look nice and will never, ever play a part in winning or losing. Likewise, all the rules matter. When problem solving in real life, though, most of the world, whether objects, the way things behave or whatever, is not there explicitly to help you solve the problem. It is not even there just to be a designed distractor. The real world also doesn’t have just a few distractors, it has lots and lots. Looking round my living room, for example, there are thousands of objects, but only one will help me turn on the TV.

AIs that are trained on games may, therefore, just become good at working in such unreal environments. They may need to be told what matters and what to ignore in order to solve problems. Real problems are much messier, so put them in the real world, or even a more realistic virtual world, to problem solve and they may turn out not to be very clever at all. Tests of their skills that are based on such game-like tasks may not really test them at all.

Researchers at the University of the Witwatersrand in South Africa decided to tackle this issue using yet another game: Minecraft. Because Minecraft is an open-ended virtual world, tackling challenges created in it means working in a world that is about much more than just the problem itself. The Witwatersrand team’s resulting MinePlanner system is a collection of 45 challenges, some easy, some harder. They include gathering tasks (like finding and gathering wood) and building tasks (like building a log cabin), as well as tasks that combine the two. Each comes in three versions. In the easy version nothing is irrelevant. The medium version contains a variety of extraneous things that are of no use to the task. The hard version is set in a full Minecraft world where there are thousands of objects that might be used.

To tackle these challenges an AI (or human) needs not just to solve the problem set for them, but also to work out for themselves what in the Minecraft world is relevant to the task they are trying to perform and what isn’t. What matters and what doesn’t?
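As a toy illustration – this is not MinePlanner’s real task format, and the goal encoding and distractor list here are made up – you can think of the three difficulty levels as the same goal wrapped in ever more irrelevant world:

# Toy sketch: an identical goal, with increasing amounts of irrelevant "world".
import random

goal = {"collect": {"wood": 3}}   # a simple gathering task

def make_world(extra_objects):
    # The objects the goal actually needs, plus distractors that play no part in it
    distractors = ["flower", "sand", "sheep", "lava", "cobblestone", "fern"]
    world = ["wood"] * 3
    world += random.choices(distractors, k=extra_objects)
    random.shuffle(world)
    return world

easy = make_world(extra_objects=0)       # nothing is irrelevant
medium = make_world(extra_objects=50)    # some extraneous objects
hard = make_world(extra_objects=5000)    # a full, cluttered world

# The planner gets the same goal each time; it must work out for itself
# that only the "wood" blocks matter.
print(len(easy), len(medium), len(hard))  # 3 53 5003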

The team hope that by setting such tests they will help encourage researchers to develop more flexible intelligences, taking us closer to having real artificial intelligence. The problems are proposed as a benchmark for others to test their AIs against. The Witwatersrand team have already put existing state-of-the-art AI planning systems to the test. They weren’t actually that great at solving the problems and even the best could not complete the harder tasks.

So it is back to school for the AIs but hopefully now they will get a much better, flexible and fun education playing games like Minecraft. Let’s just hope the robots get to play with Lego too, so they don’t get left behind educationally.


Chatbot or Cheatbot?


by Paul Curzon, Queen Mary University of London

Speech bubbles
Image by Clker-Free-Vector-Images from Pixabay

The chatbots have suddenly got everyone talking, though about them as much as with them. Why? Because one, chatGPT, has (amongst other things) reached the level of being able to fool us into thinking that it is a pretty good student.

It’s not exactly what Alan Turing was thinking about when he broached his idea of a test for intelligence for machines: if we cannot tell them apart from a human then we must accept they are intelligent. His test involved having a conversation with them over an extended period before making the decision, and that is subtly different to asking questions.

ChatGPT may be pretty close to passing an actual Turing Test but it probably still isn’t there yet. Ask the right questions and it behaves differently to a human. For example, ask it to prove that the square root of 2 is irrational and it can do it easily, and looks amazingly smart – there are lots of versions of the proof out there that it has absorbed. It isn’t actually good at maths though. Ask it to simply count or add things and it can get it wrong. Essentially, it is just good at picking out the right information from the vast store of information it has been trained on and then presenting it in a human-like way. It is arguably the way it can present it “in its own words” that makes it seem especially impressive.
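For anyone who hasn’t met it, the standard argument it is parroting is a short proof by contradiction:

Suppose $\sqrt{2} = p/q$ with $p$ and $q$ whole numbers sharing no common factor. Then $p^2 = 2q^2$, so $p^2$ is even and therefore $p$ is even, say $p = 2k$. Substituting gives $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ must be even too. But then $p$ and $q$ share the factor 2, contradicting the assumption. So $\sqrt{2}$ cannot be written as a fraction: it is irrational.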

Will we accept that it is “intelligent”? Once it was said that if a machine could beat humans at chess it would be intelligent. When one beat the best human, we just said “it’s not really intelligent – it can only play chess”. Perhaps chatGPT is just good at answering questions (amongst other things), but we won’t accept that as “intelligent” either, even if it is how we judge humans. What it can do is impressive and a step forward, though. Also, it is worth noting that other AIs are better at some of the things it is weak at – logical thinking, counting, doing arithmetic, and so on. It likely won’t be long before the different AIs’ mistakes and weaknesses are ironed out and we have ones that can do it all.

Rather than asking whether it is intelligent, what has got everyone talking (in universities and schools at least) is that chatGPT has shown it can answer all sorts of questions we traditionally use for tests well enough to pass exams. The issue is that students can now use it instead of their own brains. The cry has gone out that we must abandon setting humans essays, that we should no longer ask them to explain things, nor for that matter write (small) programs. These are all things chatGPT can now do well enough to pass such tests for any student unable to do them themselves. Others say we should be preparing students for the future, so it’s fine: from now on we should just test what a human and chatGPT can do together.

It certainly means assessment needs to be rethought to some extent, and of course this is just the start: the chatbots are only going to get better, so we had better do the thinking fast. The situation is very like the advent of calculators, though. Yes, we need everyone to learn to use calculators. But calculators didn’t mean we had to stop learning how to do maths ourselves. Essay writing, explaining, writing simple programs, analytical skills, etc, just like arithmetic, are all about core skill development, building the skills to then build on. The fact that a chatbot can do it too doesn’t mean we should stop learning and practicing those skills (and assessing them as an inducement to learn as well as a check on whether the learning has been successful). So the question should not be about what we should stop doing, but more about how we make sure students do carry on learning. A big, bad thing about cheating (aside from unfairness) is that the person who decides to cheat loses the opportunity to learn. Chatbots should not stop humans learning either.

The biggest gain we can give a student is to teach them how to learn, so now we have to work out how to make sure they continue to learn in this new world, rather than just handing over all their learning tasks to the chatbot to do. As many people have pointed out, there are not just bad ways to use a chatbot; there are also ways we can use chatbots as teaching tools. Used well by an autonomous learner, they can act as a personal tutor, immediately explaining things the student realises they don’t understand, so becoming a basis for that student doing very effective deliberate learning, fixing understanding before moving on.

Of course, there is a bigger problem: if a chatbot can do things at least as well as we can, then why would a company employ a person rather than just hire an AI? The AIs can now do a lot of jobs we assumed were ours to do. It could be yet another way of technology focussing vast wealth on the few and taking from the many. Unless our intent is a dystopian science fiction future where most humans have no role and no point (see, for example, E. M. Forster’s classic, The Machine Stops), then we still ought, in any case, to learn skills. If we are to keep ahead of the AIs and use them as a tool, not be replaced by them, we need the basic skills to build on to gain the more advanced ones needed for the future. Learning skills is also, of course, a powerful way for humans (if not yet chatbots) to gain self-fulfilment and so happiness.

Right now, an issue is that the current generation of chatbots are still very capable of being wrong. chatGPT is like an overconfident student. It will answer anything you ask, but it gives wrong answers just as confidently as right ones. Tell it it is wrong and it will give you a new answer, just as confidently and possibly just as wrong. If people are to use it in place of thinking for themselves then, in the short term at least, they still need the skill it doesn’t have: judging when it is right or wrong.

So what should we do about assessment? Formal exams come back to the fore so that conditions are controlled. They make it clear you have to be able to do it yourself. Open-book online tests, which became popular in the pandemic, are unlikely to be fair assessments any more, but arguably they never were: chatbots or not, they were always too easy to cheat in. They may well still be good for learning, though. Perhaps in future, if the chatbots are so clever, we could turn the Turing test around: we just ask an artificial intelligence to decide whether particular humans (our students) are “intelligent” or not…

Alternatively, if we don’t like the solutions being suggested for the problems these new chatbots are raising, there is now another way forward. If they are so clever, we could just ask a chatbot to tell us what we should do about chatbots…



This blog is funded through EPSRC grant EP/W033615/1.