Mixing Research with Entrepreneurship: Find a need and solve it

A mixing desk
Image by Ida from Pixabay

Becoming a successful entrepreneur often starts with seeing a need: a problem someone has that needs to be fixed. For David Ronan, the need was for anyone to be able to mix and master music; the problem was how hard that is to do. Now his company RoEx is fixing that problem by applying a combination of signal processing and artificial intelligence tools to music. It is based on research he began as a PhD student.

Musicians want to make music, though by “make music” they likely mean playing or composing music. The task of fiddling with buttons, sliders and dials on a mixing desk to balance the different tracks of music may not be a musician’s idea of what making music is really about, even though it is “making music” to a sound engineer or producer. However, mixing is now an important part of the modern process of creating professional standard music.

This is in part a result of the multitrack recording revolution of the 1960s. Multitrack involves recording different parts of the music as different tracks, then combining them later, adding effects, combining them some more… George Martin and the Beatles pioneered its use for mainstream pop music in the 1960s, and the Beach Boys created their unique “Pet Sounds” through this kind of multitrack recording too. Now it is totally standard. Originally, though, recording music involved running a recording machine while a band, orchestra and/or singers did their thing together. If it wasn’t good enough they would do it all again from the beginning (and again, and again…). This is similar to the way that actors will act the same scene over and over, dozens of times, until the director is happy. Once happy with the take (or recording), that was basically it and they moved on to the next song to record.

With the advent of multitracking, each musician could instead play or sing their part on their own. They didn’t have to record at the same time, or even be in the same place, as the separate parts could be mixed together into a single whole later. It became the job of the engineers and the producer to put it all together. Part of this is to adjust the levels of each track so they are balanced. You want to hear the vocals, for example, and not have them drowned out by the drums. At this point the engineer can also fix mistakes, cutting in a rerecording of one small part to replace something that wasn’t played quite right. Different special effects can also be applied to different tracks (playing one track at a different speed or even backwards, with reverb or auto-tuned, for example). You can also take one singer and allow them to sing with multiple versions of themselves, so that they are their own backing group, singing layered harmonies with themselves. One person can even play all the separate instruments as, for example, Prince often did on his recordings.

The engineers and producer then create the final sound, making the final master recording. Some musicians, like Madonna, Ariana Grande and Taylor Swift, do take part in the production and engineering of their records, or even take over completely, so they have total control of their sound. It takes experience though, and why shouldn’t everyone have that amount of creative control?

Doing all the mixing, correction and overdubbing can be laborious and takes a lot of skill, though. It can be very creative in itself too, which is why producers are often as famous as the artists they produce (think Quincy Jones or Nile Rodgers, for example). However, not everyone wanting to make their own music is interested in spending their time doing laborious mixing. But if you don’t yet have the skill yourself and can’t afford to pay a producer, what do you do?

That was the need that David spotted. He wanted to do for music what Instagram filters did for images, and make it easy for anyone to make and publish their own professional standard music. Based in part on his PhD research, he developed tools that could do the mixing, leaving a musician to focus on experimenting with the sound itself.

David had spent several years leading the research team of an earlier startup he helped found called AI Music. It worked on adaptive music: music that changes based on what is happening around it, whether in the world or in a video game being played. It was later bought by Apple. This was the highlight of his career to that point and it helped cement his desire to continue to be an innovator and entrepreneur. 

With the help of Queen Mary, where he did his PhD, he therefore decided to set up his new company, RoEx. It provides an AI-driven mixing and mastering service. You choose basic mixing options and can experiment with different results, so you still have creative control. However, you no longer need expensive equipment, nor the skills to use it. The process becomes far faster too. Mixing your music becomes much more about experimenting with the sound: the machine takes over the laborious parts, working out the optimum way to mix the different tracks and producing a professional quality master recording at the end.

David didn’t just see a need and have an idea of how to solve it, he turned it into something that people want to use by not only developing the technology, but also making sure he really understood the need. He worked with musicians and producers through a long research and development process to ensure his product really works for any musician.

– Paul Curzon, Queen Mary University of London

More on …


Magazines …


Our Books …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

The logic of Queens

A corner of a Queens Puzzle: Image by Paul Curzon

Queens is a fairly simple kind of logic puzzle found, for example, on LinkedIn as a way to draw you back to the site. Doing daily logic puzzles is good both for mental health and for building logical thinking skills. As with programming, solving logic puzzles is mostly about pattern matching (also a useful skill to practise daily) rather than logic per se. The logic mainly comes in working out the patterns.

Let’s explore this with Queens. The puzzle has simple rules. The board is divided into coloured territories and you must place a Queen in each territory. However, no two Queens can be in the same row or column. Also no two Queens can be adjacent, horizontally, vertically or diagonally.

If we were just to use pure logic on these puzzles, we would perhaps return constantly to the rules themselves to try and deduce where Queens go. That is perhaps how novices try to solve puzzles (and possibly get frustrated and give up). Instead, those who are good at puzzles create higher-level rules derived from the basic rules. Then they apply (i.e. pattern match against) the new rules whenever the situation arises. As an aside, this is exactly how I worked when using machine-assisted proof to prove that programs and hardware correctly met their specification, doing research into better ways to ensure the critical devices we create are correct.

Let’s look at an example from Queens. Here is a puzzle to work on. Can you place the 8 Queens?

An initial Queens puzzle - an 8x8 grid with 8 territories marked out
Image by Paul Curzon
The same puzzle with squares in one column ruled out as places for a Queen
Image by Paul Curzon

Where to start? Well notice the grey territory near the bottom. It is a territory that lives totally in one column. If we go to the rules of Queens we know that there must be a Queen in this territory. That means that Queen must be in that column. We also know that only one Queen can be in a column. That means none of the other territories in that column can possibly hold a Queen there. We can cross them all out as shown.

In effect we have created a new derived inference rule.

IF a territory only has squares available in one column 

THEN cross out all squares of other territories in that column

By similar logic we can create a similar rule for rows.

Now we can just pattern match against the situation described in that rule. If ever you see a territory contained completely in a row or column, you can cross out everything else in that row/column.
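That derived rule is simple enough to express in code. Here is a minimal Python sketch of it (the board representation and names are my own illustration, not from any real Queens app): each territory is a set of (row, column) squares, and the rule crosses out other territories’ squares in any column that wholly contains a territory.

```python
def apply_column_rule(territories):
    """Derived rule: IF a territory only has squares available in one
    column, THEN cross out all squares of other territories in that
    column. territories maps a name to a set of (row, col) squares."""
    for name, squares in territories.items():
        cols = {c for (r, c) in squares}
        if len(cols) == 1:          # territory lives entirely in one column
            col = cols.pop()
            for other, other_squares in territories.items():
                if other != name:   # cross out everyone else's squares there
                    territories[other] = {
                        (r, c) for (r, c) in other_squares if c != col
                    }
    return territories

# A tiny example: 'grey' sits entirely in column 1,
# so 'red' loses its column-1 square.
board = {
    "grey": {(5, 1), (6, 1)},
    "red": {(0, 1), (0, 2), (1, 2)},
}
apply_column_rule(board)
```

Solving then becomes a loop of applying rules like this one until no rule changes anything, which is exactly the pattern-matching process described above.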

In our case, doing that creates new situations that match the rule. You may also be able to work out other rules. One obvious new rule is the following:

IF a territory only has one free space left and no Queens 

THEN put a Queen in that free space
The same puzzle with squares in two more columns ruled out as places for two more Queens
Image by Paul Curzon

We can derive more complicated rules too. For example, we can generalise our first rule to two columns. Can you find a pair of territories that reside only in the same two columns? There is such a pair in the top right corner of our puzzle. If there is such a situation then, as both must have a Queen, between them they must provide the Queens for both of those columns. That means we can cross out all the squares from other territories in those two columns. We get the rule:

IF two territories only have squares available in two columns

THEN cross out all squares of other territories in both columns

Becoming good at Queens puzzles is all about creating more of these rules that quickly allow you to make progress in all situations. As you apply rules, new rules become applicable until the puzzle is solved.

Can you apply these rules and, if need be, derive some more to pattern match your way to solving this puzzle?

It turns out that programming is a lot like this too. For a novice, writing code is a battle with the details of the semantics (the underlying logical meaning) of the language, finding a construct that does what is needed. The more expert you become, the more you see patterns where you have a rule you can apply to provide the code solution: IF I need to do this repeatedly, counting from 1 to some number, THEN I use a for loop like this… IF I have to process a 2-dimensional matrix of possibilities THEN I need a pair of nested for loops that traverse it by rows and columns… IF I need to do input validation THEN I need this particular structure involving a while loop… and so on.
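Those IF-THEN coding rules can be made concrete. Here is a sketch of all three in Python (the function names and data are just for illustration):

```python
# IF I need to do something repeatedly, counting from 1 to some number,
# THEN I use a for loop like this:
def sum_to(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# IF I have to process a 2-dimensional matrix of possibilities,
# THEN I need a pair of nested for loops over rows and columns:
def count_positive(matrix):
    count = 0
    for row in matrix:
        for value in row:
            if value > 0:
                count += 1
    return count

# IF I need to do input validation,
# THEN I need a while loop that keeps asking until the input is valid.
# (Here answers come from a list rather than the keyboard, so the
# sketch is self-contained.)
def read_digit(answers):
    answer = answers.pop(0)
    while not (answer.isdigit() and 0 <= int(answer) <= 9):
        answer = answers.pop(0)
    return int(answer)
```

An expert doesn’t re-derive these shapes from the language semantics each time: they recognise the situation and reach for the matching pattern, just as with the Queens rules.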

Perhaps more surprisingly, research into expert behaviour suggests that this is what all expert behaviour boils down to. Expert intuition is all about subconscious pattern matching against situations seen before, turned into subconscious rules, whether in expert firefighters or expert chess players. Now machine learning AIs are becoming experts at things we are good at. Not surprisingly, what machine learning algorithms are good at is spotting patterns to drive their behaviour.

Paul Curzon, Queen Mary University of London


Ask About Asthma

An inhaler being pressed so the mist of drug can be seen emerging
Image of inhaler by CNordic from Pixabay

Mid-September, as many young people are heading back to school after their summer holiday, is Asthma Week, when NHS England suggests that teachers, employers and government workers #AskAboutAsthma. The goal is to raise awareness of the experiences of those with asthma, and to suggest techniques that can be put in place to help children and young people with asthma live their best lives.

One of the key bits of kit in the arsenal of people with asthma is an inhaler. When used, an inhaler can administer medication directly into the lungs and airways as the user breathes in. In the case of those with asthma, an inhaler can help to reduce inflammation in the airways which might prevent air from entering the lungs, especially during an asthma attack.

It’s only recently, however, that inhalers have been getting the technology treatment. Smart inhalers can help remind those with asthma to take their medication as prescribed (a common challenge) as well as tracking their use, information which can be shared with doctors, carers or parents. Some smart inhalers can also identify whether the correct inhaler technique is being used. Researchers have been able to achieve this by putting the audio of people using an inhaler through a neural network (a form of artificial intelligence), which can then classify between good and bad technique.
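The classify-from-audio idea can be sketched with a toy model. Everything below (the two “audio features”, the thresholds and the training data) is invented for illustration: a real smart inhaler would use a proper neural network trained on recorded inhaler audio, but even this tiny perceptron shows how a program can learn to separate good technique from bad.

```python
# Toy sketch: classify inhaler technique from two made-up audio
# features (breath length in seconds, loudness between 0 and 1).

def train_perceptron(examples, epochs=20, rate=0.1):
    """Learn weights separating good (label 1) from bad (label 0)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for features, label in examples:
            guess = 1 if w[0]*features[0] + w[1]*features[1] + b > 0 else 0
            error = label - guess          # nudge weights on mistakes
            w[0] += rate * error * features[0]
            w[1] += rate * error * features[1]
            b += rate * error
    return w, b

def classify(w, b, features):
    return "good" if w[0]*features[0] + w[1]*features[1] + b > 0 else "bad"

# Invented training data: long, steady breaths count as good technique.
training = [((3.0, 0.6), 1), ((2.8, 0.5), 1),
            ((0.5, 0.9), 0), ((0.7, 0.8), 0)]
w, b = train_perceptron(training)
```

A real system would extract many more features from the sound and use far more training recordings, but the learning loop is the same in spirit.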

As with any medical technology, these smart inhalers need to be tested with people with asthma to check that they are safe and, importantly, that they are better than the existing solutions. One such study started in Leicester in July 2024, where smart inhalers (in this case, ones that clip onto existing inhalers) are being given to around 300 children in the city. The researchers will wait to see if these children have better outcomes than those who are using regular inhalers.

This sort of technology is a great example of what computer scientists call the “Internet of Things” (IoT). This refers to small computers embedded within other devices that can interact over the internet… think smart lights in your home that connect to a home assistant, or fridges that can order food when you run out.

A lot of medical devices are being integrated into the internet like this… a smart watch can track the wearer’s heart rate continuously and store it in a database for later, for example. Will this help us to live happier, healthier lives though? Or could we end up finding concerning patterns where there are none?

Daniel Gill, Queen Mary University of London


Art Touch and Talk Tour Tech

A sculpture of a head and shoulders, heavily textured with a network of lines and points
Image by NoName_13 from Pixabay

What could a blind or partially-sighted person get from a visit to an art gallery? Quite a lot if the art gallery puts their mind to it. Even more if they make use of technology. So much so, we may all want the enhanced experience.

The best art galleries provide special tours for blind and partially-sighted people. One kind involves a guide or curator explaining paintings and other works of art in depth. It is not exactly like a normal guided tour that might focus on the history or importance of a painting. The best will give both an overview of the history and importance whilst also giving a detailed description of the whole picture as well as the detail, emphasising how each part was painted. They might, for example, describe the brush strokes and technique as well as what is depicted. They help the viewer create a really detailed mental model of the painting.

One visually-impaired guide who now gives such tours at galleries such as Tate Britain, Lisa Squirrel, has argued that these tours give a much deeper and richer understanding of the art than a normal tour, and certainly more than someone just looking at the pictures and reading the text as they wander around. Lisa studied Art History at university and, before visiting a gallery herself, reads lots and lots about the works and artists she will visit. She found that guided tours by sighted experts using guided hand movements in front of a painting helped her build really good internal models of the works in her mind. Combined with her extensive knowledge from reading, she wasn’t building just a picture of the image depicted but of the way it was painted too. She gained a deep understanding of the works she explored, including what was special about them.

The other kind of tour art galleries provide is a touch tour. It involves blind and partially-sighted visitors being allowed to touch selected works of art as part of a guided tour where a curator also explains the art. Blind art lover Georgina Kleege has suggested that touch tours give a much richer experience than a normal tour, and should be put on for everyone for this reason. It is again about more than just feeling the shape and so working out its form, “seeing” what a sighted person would take in at a glance. It is about gaining a whole different sensory experience of the work: its texture, for example; not a lesser version of what it looks like.

How might technology help? Well, the company, NeuroDigital Technologies, has developed a haptic glove system for the purpose. Haptic gloves are gloves that contain vibration pads that stimulate the skin of the person in different, very fine ways so as to fool the wearer’s brain into thinking it is touching things of different shapes and textures. Their system has over a thousand different vibration patterns to simulate different feelings of touching surfaces. They also contain sensors that determine the precise position of the gloves in space as the person moves their hands around.

The team behind the idea scanned several works of art using very accurate laser scanners that build up a 3D picture of the thing being scanned. From this they created a 3D model of the work. This then allowed a person wearing the gloves to feel as though they were touching the actual sculpture, feeling all the detail. More than that, the team could augment the experience to give enhanced feelings in places in shadow, for example, or to emphasise different parts of the work.

A similar system could be applied to historical artifacts too: allowing people to “feel”, not just see, the Rosetta Stone, for example. Perhaps it could also be applied to paintings to allow a person to feel the brush strokes in a way that just could not otherwise be done. This would give an enhanced version of the experience Lisa found so useful: having her hand guided in front of a painting while the brush strokes and areas were described. Different colours might also be coded with different vibration patterns, allowing a series of different enhanced touch tours of a painting, first exploring its colours, then its brush strokes, and so on.
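At its heart, coding colours as vibration patterns is a lookup from what is under the hand to a pattern to play. A hedged sketch in Python (the colour bands and pattern numbers here are invented; NeuroDigital’s real system has over a thousand patterns):

```python
# Sketch: map a pixel's colour to a vibration pattern ID for a
# haptic glove. The bands and pattern numbers are invented examples.

def vibration_pattern(rgb):
    r, g, b = rgb
    if r > 200 and g > 200 and b > 200:
        return 0        # near-white: a barely-there flutter
    if r > max(g, b) + 50:
        return 1        # strongly red: a sharp pulse
    if g > max(r, b) + 50:
        return 2        # strongly green: a rolling wave
    if b > max(r, g) + 50:
        return 3        # strongly blue: a slow throb
    return 4            # mixed colour: a default texture

# As the glove's position sensors report which part of the virtual
# painting the hand is "over", the matching pattern would be played.
```

Swapping the lookup table (colours one tour, brush-stroke direction the next) is what would give the series of different enhanced touch tours of the same painting.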

What about talking tours? Can technology help there? AIs can already describe pictures, but early versions at least were trained on the descriptions people had given to images on the Internet: “a black cat sitting on top of the TV looking cute”; the Mona Lisa: “a young woman staring at you”. That in itself wouldn’t cut it. Neither would training the AI on the normal brief descriptions on the gallery walls next to works of art. However, art books and websites are full of detail, and more recent AIs can give very detailed descriptions of art works if asked. These descriptions include what the picture looks like overall, the components, colours, brushstrokes and composition, symbolism, historical context and more (at least for famous paintings). With specific training from curators and art historians the AIs will only get better.

What is still missing for a blind person, though, from the kind of experience Lisa has with a guide, is the link to the actual picture in space: having the guide move her hand in front of the painting as the parts are described. However, all that is needed to fill that gap is to combine a chat-based AI with a haptic glove system (and provide a way to link descriptions to spatial locations on the image). Then the descriptions can be linked to the position of a hand moving in space in front of a virtual version of the picture. Combine that with the kind of system already invented to help blind people navigate, where vibrations on a walking stick indicate directions and times to turn, and the gloves can then not only give haptic sensations of the picture or sculpture, but also guide the person’s movement over it.

Whether you have such an experience in a gallery, in front of the work of art, or in your own front room, blind and partially-sighted people could soon be getting much better experiences of art than sighted people. At which point, as Georgina Kleege suggested for normal touch tours, everyone else will likely want the full “blind” experience too.

Paul Curzon, Queen Mary University of London


Pac-Man and Games for Girls

In the beginning video games were designed for boys…and then came Pac-Man.

Pac-man eating dots
Image by OpenClipart-Vectors from Pixabay

Before mobile games, game consoles and PC-based games, video games first took off in arcades. Arcade games were very big business, earning 39 billion dollars at their peak in the 1980s. Games were loaded into bespoke coin-operated arcade machines. For a game to do well someone had to buy the machines, whether for actual gaming arcades or for bars, cafes, colleges, shopping malls, … Then someone had to play them. Originally boys played arcade games the most and so games were targeted at them. Most games focussed on shooting things, like Asteroids and Space Invaders, or had some link to sports, building on the original arcade game Pong. Girls were largely ignored by the designers… But then came Pac-Man.

Pac-Man, created by a team led by Toru Iwatani, is a maze game where the player controls the Pac-Man character as it moves around a maze, eating dots while being chased by the ghosts: Blinky, Pinky, Inky and Clyde. Special power pellets around the maze, when eaten, allow Pac-Man to chase the ghosts for a while instead of being chased.

Pac-Man ultimately made around 19 billion dollars in today’s money, making it the biggest money-making arcade video game of all time. How did it do it? It was the first game played by more females than males. It showed that girls would enjoy playing games if only the right kind of games were developed. Suddenly, and rather ironically given its name, there was a reason for the manufacturers to take notice of girls, not just boys.

A Pac-man like ghost
Image by OpenClipart-Vectors from Pixabay

It revolutionised games in many ways, showing the potential of different kinds of features to give a game much broader appeal. Most obviously, Pac-Man turned the tide away from shoot-’em-up space games and sports games towards action games where characters were the star, and that was one of its inventor Toru Iwatani’s key aims. To play, you control Pac-Man rather than just a gun, blaster, tennis racket or golf club. It paved the way for Donkey Kong, Super Mario, and the rest (so if you love Mario and all his friends, then thank Pac-Man). Ultimately, it forged the path for the whole idea of avatars in games too.

It was the first game to use power-ups, where, by collecting certain objects, the character gains extra powers for a short time. The ghosts were also characters controlled by simple AI: they didn’t just behave randomly or follow some fixed algorithm controlling their path, but reacted to what the player did, and each had its own personality in the way it behaved.
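Each ghost’s “personality” came from a different targeting rule. A simplified Python sketch of the two best-documented ones: Blinky chases Pac-Man’s square directly, while Pinky aims four squares ahead of him, so between them they tend to trap the player. (The grid helper below is my own simplification, not the arcade game’s actual movement code.)

```python
# Simplified ghost "personalities" from Pac-Man. Positions are
# (x, y) grid squares; directions are unit steps like (1, 0).

def blinky_target(pacman_pos, pacman_dir):
    return pacman_pos                    # chase: head straight for Pac-Man

def pinky_target(pacman_pos, pacman_dir):
    x, y = pacman_pos
    dx, dy = pacman_dir
    return (x + 4 * dx, y + 4 * dy)      # ambush: aim ahead of Pac-Man

def step_towards(ghost_pos, target):
    """Move one square horizontally or vertically towards the target."""
    gx, gy = ghost_pos
    tx, ty = target
    if abs(tx - gx) >= abs(ty - gy) and tx != gx:
        return (gx + (1 if tx > gx else -1), gy)
    if ty != gy:
        return (gx, gy + (1 if ty > gy else -1))
    return ghost_pos

# Pac-Man at (10, 10) heading right: Blinky aims at (10, 10),
# Pinky aims at (14, 10) to cut him off.
```

Because each ghost computes a different target from the same player state, they behave differently without any of them being random: that is where the personalities come from.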

Because of its success, maze and character-based adventure games became popular with manufacturers, but more importantly designers became more adventurous and creative about what a video game could be. It was also a first big step along the long road towards women being fully accepted in the games industry. Not bad for a character based on a combination of a pizza and the Japanese symbol for “mouth”.

– Paul Curzon, Queen Mary University of London


Testing AIs in Minecraft

by Paul Curzon, Queen Mary University of London

What makes a good environment for child AI learning development? Possibly the same as for human child learning development: Minecraft.

Lego is one of the best games to play for impactful learning development in children. The name Lego comes from the Danish phrase “leg godt”, meaning “play well”. In the virtual world, Minecraft has of course taken up the mantle. A large part of why they are wonderful games is that they are open-ended and flexible. There are infinite possibilities over what you can build and do. Rather than focussing on something limited to learn, as many other games do, they support open-ended creativity and so educational development. Given how positive it can be for children, it shouldn’t be surprising that Minecraft is now being used to help AIs develop too.

Games have long been used to train and test Artificial Intelligence programs. Early programs were developed to play and ultimately beat humans at specific games like Checkers, Chess and, later, Go. That mastered, they started to learn to play individual arcade games as a way to extend their abilities. A key part of our intelligence is flexibility, though: we can learn new games. Aiming to copy this, AIs were trained to follow suit, becoming more flexible and showing they could learn to play multiple arcade games well.

This is still missing a vital part of our flexibility, though. The thing about all these games is that the whole game experience is designed to be part of the game and so of the task the player has to complete. Everything is there for a reason. It is all an integral part of the game. There are no pieces at all in a chess game that are just there to look nice and will never, ever play a part in winning or losing. Likewise, all the rules matter. When problem solving in real life, though, most of the world, whether objects, the way things behave or whatever, is not there explicitly to help you solve the problem. It is not even there just to be a designed distractor. The real world also doesn’t have just a few distractors, it has lots and lots. Looking round my living room, for example, there are thousands of objects, but only one will help me turn on the TV.

AIs that are trained on games may, therefore, just become good at working in such unreal environments. They may need to be told what matters and what to ignore to solve problems. Real problems are much more messy, so put them in the real world, or even a more realistic virtual world, to problem solve and they may turn out to be not very clever at all. Tests of their skills that are based on such tasks may not really test them at all.

Researchers at the University of the Witwatersrand in South Africa decided to tackle this issue by using yet another game: Minecraft. Because Minecraft is an open-ended virtual world, tackling challenges created in it involves working in a world that is much more than just the problem itself. The Witwatersrand team’s resulting MinePlanner system is a collection of 45 challenges, some easy, some harder. They include gathering tasks (like finding and gathering wood) and building tasks (like building a log cabin), as well as tasks that combine these things. Each comes in three versions. In the easy version nothing is irrelevant. The medium version contains a variety of extraneous things that are of no use for the task. The hard version is set in a full Minecraft world where there are thousands of objects that might be used.

To tackle these challenges an AI (or human) needs to solve not just the complex problem set, but also work out for themselves what in the Minecraft world is relevant to the task they are trying to perform and what isn’t. What matters and what doesn’t?
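One way to see the relevance problem is as a chain of “what do I need for this?” questions. A toy Python sketch (the item names and recipes here are invented for illustration, not taken from MinePlanner itself):

```python
# Toy sketch of the relevance problem MinePlanner poses: given a goal
# and a world full of objects, work out which objects actually matter.
# Relevance here is computed by chasing simple "needs" recipes.

RECIPES = {
    "log cabin": ["wood", "door"],
    "door": ["wood"],
    "wood": [],          # gathered directly from trees
}

def relevant_objects(goal, recipes):
    """Everything needed for the goal, found by following recipes."""
    needed = set()
    todo = [goal]
    while todo:
        item = todo.pop()
        if item not in needed:
            needed.add(item)
            todo.extend(recipes.get(item, []))
    return needed

world = {"wood", "door", "flower", "sheep", "lava", "log cabin"}
relevant = relevant_objects("log cabin", RECIPES)
distractors = world - relevant   # everything safe to ignore
```

In the easy MinePlanner versions the distractor set is empty; in the hard, full-world versions it contains thousands of objects, and the AI gets no recipe book: working out something like the table above for itself is a large part of the challenge.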

The team hope that by setting such tests they will help encourage researchers to develop more flexible intelligences, taking us closer to having real artificial intelligence. The problems are proposed as a benchmark for others to test their AIs against. The Witwatersrand team have already put existing state-of-the-art AI planning systems to the test. They weren’t actually that great at solving the problems and even the best could not complete the harder tasks.

So it is back to school for the AIs but hopefully now they will get a much better, flexible and fun education playing games like Minecraft. Let’s just hope the robots get to play with Lego too, so they don’t get left behind educationally.


Computers that read emotions

by Matthew Purver, Queen Mary University of London

One of the ways that computers could be more like humans – and maybe pass the Turing test – is by responding to emotion. But how could a computer learn to read human emotions out of words? Matthew Purver of Queen Mary University of London tells us how.

Have you ever thought about why you add emoticons to your text messages – symbols like 🙂 and :-@? Why do we do this with some messages but not with others? And why do we use different words, symbols and abbreviations in texts, Twitter messages, Facebook status updates and formal writing?

In face-to-face conversation, we get a lot of information from the way someone sounds, their facial expressions, and their gestures. In particular, this is the way we convey much of our emotional information – how happy or annoyed we’re feeling about what we’re saying. But when we’re sending a written message, these audio-visual cues are lost – so we have to think of other ways to convey the same information. The ways we choose to do this depend on the space we have available, and on what we think other people will understand. If we’re writing a book or an article, with lots of space and time available, we can use extra words to fully describe our point of view. But if we’re writing an SMS message when we’re short of time and the phone keypad takes time to use, or if we’re writing on Twitter and only have 140 characters of space, then we need to think of other conventions. Humans are very good at this – we can invent and understand new symbols, words or abbreviations quite easily. If you hadn’t seen the 😀 symbol before, you can probably guess what it means – especially if you know something about the person texting you, and what you’re talking about.

But computers are terrible at this. They’re generally bad at guessing new things, and they’re bad at understanding the way we naturally express ourselves. So if computers need to understand what people are writing to each other in short messages like on Twitter or Facebook, we have a problem. But this is something researchers would really like to do: for example, researchers in France, Germany and Ireland have all found that Twitter opinions can help predict election results, sometimes better than standard exit polls – and if we could accurately understand whether people are feeling happy or angry about a candidate when they tweet about them, we’d have a powerful tool for understanding popular opinion. Similarly we could automatically find out whether people liked a new product when it was launched; and some research even suggests you could predict the stock market. But how do we teach computers to understand emotional content, and learn to adapt to the new ways we express it?

One answer might be in a class of techniques called semi-supervised learning. By taking some example messages in which the authors have made the emotional content very clear (using emoticons, or specific conventions like Twitter’s #fail or abbreviations like LOL), we can give ourselves a foundation to build on. A computer can learn the words and phrases that seem to be associated with these clear emotions, so it understands this limited set of messages. Then, by allowing it to find new data with the same words and phrases, it can learn new examples for itself. Eventually, it can learn new symbols or phrases if it sees them together with emotional patterns it already knows enough times to be confident, and then we’re on our way towards an emotionally aware computer. However, we’re still a fair way off getting it right all the time, every time.
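The bootstrapping idea can be sketched in a few lines of Python. This is a deliberately tiny, invented example, not a real system: it learns which words go with which emotion from messages whose emoticons make the emotion clear, then labels new, emoticon-free messages by the words they share.

```python
# Minimal sketch of the semi-supervised idea: emoticon-labelled seed
# messages teach the program a word-to-emotion lexicon, which can then
# label messages that have no emoticon. All messages are invented.

SEEDS = [
    ("great day :)", "happy"),
    ("love this :)", "happy"),
    ("awful service :-@", "angry"),
    ("this is terrible :-@", "angry"),
]

def learn_word_emotions(seeds):
    word_emotion = {}
    for message, emotion in seeds:
        for word in message.split():
            if not word.startswith(":"):    # skip the emoticon itself
                word_emotion[word] = emotion
    return word_emotion

def guess_emotion(message, word_emotion):
    votes = [word_emotion[w] for w in message.split() if w in word_emotion]
    if not votes:
        return "unknown"
    return max(set(votes), key=votes.count)   # majority vote of known words

lexicon = learn_word_emotions(SEEDS)
```

Messages the program labels confidently could then be fed back in as new seeds, growing the lexicon and picking up new words and symbols as they appear: that feedback loop is the “semi-supervised” part.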





EPSRC supports this blog through research grant EP/W033615/1.

Could AI end science?

by Nick Ballou, Oxford Internet Institute

Scientific fraud is worryingly common, though rarely talked about. It has been happening for years, but now Artificial Intelligence programs could supercharge it. If they do, it could undermine science itself.

Investigators of scientific fraud have found that large numbers of researchers have manipulated their results, invented data, or even produced nonsensical papers in the hope that no one will look closely enough to notice. Often, no one does. The problem is that science is built on the foundation of all the research that has gone before. If we can no longer trust that past research is legitimate, the whole system of science begins to break down. AI has the potential to supercharge this process.

We’re not at that point yet, luckily. But there are concerning signs that generative AI systems like ChatGPT and DALL-E might bring us closer. By using AI technology, producing fraudulent research has never been easier, faster, or more convincing. To understand why, let’s first look at how scientific fraud has been done in the past.

How fraud happens 

Until recently, fraudsters would need to go through some difficult steps to get a fraudulent research paper published. A typical example might look like this: 

Step 1: invent a title

Fraudsters look for a popular but very broad research topic. We’ll take an example of a group of fraudsters known as the Tadpole Paper Mill. They published papers about cellular biology. To choose a new paper to create, the group would essentially use a simple generator, or algorithm, based on a template. This uses a simple technique first used by Christopher Strachey to write love letters in an early “creative” program in the 1950s.

For each “hole” in the template a word is chosen from a word list.

  1. Pick the name of a molecule
    • Either a protein name, a drug name or an RNA molecule name
    • eg mir-488
  2. Pick a verb
    • From alleviates, attenuates, exerts, …
    • eg inhibits
  3. Pick one or two cellular processes
    • From invasion, migration, proliferation, …
    • eg cell growth and metastasis
  4. Pick a cancer or cell type
    • From lung cancer, ovarian cancer, …
    • eg renal cell carcinoma
  5. Pick a connector word
    • From by, via, through, …
    • eg by
  6. Pick a verb
    • From activating, targeting, …
    • eg targeting
  7. Pick a name
    • Either a pathway, protein or miRNA molecule name
    • eg hMgn5

This produces a complicated-sounding title such as “mir-488 inhibits cell growth and metastasis in renal cell carcinoma by targeting hMgn5”. This is the name of a real fraudulent paper created this way.
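The fill-in-the-holes template above can be sketched as a tiny Python generator. The word lists below are just the partial examples given in the article – a real paper mill would use far longer lists:

```python
# A sketch of the title template: fill each "hole" with a word
# picked from its list. The lists are the article's examples only.

import random

MOLECULES  = ["mir-488"]                  # protein, drug or RNA names
VERBS      = ["alleviates", "attenuates", "exerts", "inhibits"]
PROCESSES  = ["invasion", "migration", "proliferation",
              "cell growth and metastasis"]
CANCERS    = ["lung cancer", "ovarian cancer", "renal cell carcinoma"]
CONNECTORS = ["by", "via", "through"]
MECHANISMS = ["activating", "targeting"]
TARGETS    = ["hMgn5"]                    # pathway, protein or miRNA names

def make_title(pick=random.choice):
    # Fill the seven holes of the template, in order.
    return "{} {} {} in {} {} {} {}".format(
        pick(MOLECULES), pick(VERBS), pick(PROCESSES), pick(CANCERS),
        pick(CONNECTORS), pick(MECHANISMS), pick(TARGETS))
```

Fixing each choice to the article’s example words reproduces exactly the real fraudulent title quoted above, which shows how mechanical the whole process is.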

Step 2: write the paper

Next, the fraudsters create the text of the paper. To do this, they often just plagiarise and lightly edit previous similar papers, perhaps substituting in key words from their invented title. To try to hide the plagiarism, they automatically swap out words, replacing them with synonyms. This often leads to ridiculous (and kind of hilarious) replacements, like these found in plagiarised papers:

  • “Big data” –> “Colossal information” 
  • “Cloud computing” –> “Haze figuring”
  • “Developing countries” –> “Creating nations”
  • “Kidney failure” –> “Kidney disappointment”
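The kind of blind, word-by-word substitution that produces these giveaway phrases can be sketched in a few lines of Python. The tiny thesaurus here is a stand-in built from the article’s examples – real fraudsters use automated thesaurus tools, with equally clumsy results:

```python
# A sketch of crude synonym substitution: swap each word for a
# "synonym" with no regard for context, leaving unknown words alone.

THESAURUS = {
    "big": "colossal", "data": "information",
    "cloud": "haze", "computing": "figuring",
    "developing": "creating", "countries": "nations",
    "failure": "disappointment",
}

def disguise(text):
    # Replace every word we have a "synonym" for, word by word.
    return " ".join(THESAURUS.get(w, w) for w in text.lower().split())
```

Because the swap ignores context entirely, technical phrases like “big data” come out as the nonsensical “colossal information” – which is exactly how fraud-busters spot them.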

Step 3: add in the results

Lastly, the fraudsters need to create results for the fake study. These usually appear in papers in the form of images and graphs. To do this, the fraudsters take the results from several previous papers and recombine them into something that looks mostly real, but is just a Frankenstein mess of other results that have nothing to do with the current paper.

A new paper is born

Using that simple formula, fraudsters have produced thousands of fabricated articles in the last 10 years. Even after a vast amount of effort, the dedicated volunteers who are trying to clean up the mess have only caught a handful. 

However, committing fraud like this successfully isn’t exactly easy, either: the fraudsters still need to come up with a research idea, write the paper themselves without copying too much from previous research, and make up results that look convincing—at least at first glance. 

AI: Adding fuel to the fire 

So what happens when we add modern generative AI programs into the mix? These are Artificial Intelligence programs like ChatGPT or DALL-E that can create text or pictures for you based on written requests.

Well, the quality of the fraud goes up, and the difficulty of producing it goes way down. This is true for both text and images.

Let’s start with text. Just now, I asked ChatGPT-4 to “write the first two paragraphs of a research paper on a cutting edge topic in psychology.” I then asked it to “write a fake results table that shows a positive relationship between climate change severity and anxiety”. I won’t copy the whole thing—in part because I encourage you to try this yourself to see how it works (not to actually create a fake paper!)—but here’s a sample of what it came up with: 

“As the planet faces increasing temperatures, extreme weather events, and environmental degradation, the mental health repercussions for populations worldwide become a crucial area of investigation. Understanding these effects is vital for developing strategies to support communities in coping with the psychological challenges posed by a changing climate.”

As someone who has written many psychology research papers, I would find the results very difficult to identify as AI-generated—it looks and sounds very similar to how people in my field write, and it even generated Python code to analyse the fake data. I’d need to take a really close look at the origin of the data and so on to figure out that it’s fraudulent. 

But that’s a lot of work required from me as a fraud-buster. For the fraudster, doing this takes about a minute, and it would not be detected by any plagiarism software in the way previous kinds of fraud can be. In fact, it might only be detected if the fraudsters make a sloppy mistake, like leaving in a disclaimer from the model, as in one paper that was caught because it included the text

“[Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual resutls should be included in the table.]”! 

Generative AIs are not close to human intelligence, at least not yet. So, why are they so good at producing convincing scientific research, something that’s commonly seen as one of the most difficult things humans can do? Two reasons play a big part: (1) scientific research is very structured, and (2) there’s a lot of training data. In any given field of research, most papers tend to look pretty similar—an introduction section, a method describing what the researchers did, a results section with a few tables and figures, and a discussion that links it back to the wider research field. Many journals even require a fixed structure.

Generative AI programs work using Machine Learning – they learn from data, and the more data they are given the better they become. Give a machine learning program millions of images of cats, telling it that is what they are, and it can become very good at recognising cats. Give it millions of images of dogs and it will be able to recognise dogs too. With roughly 3 million scientific papers published every year, generative AI systems are really good at taking these many, many examples of what a scientific report looks like, and producing similar-sounding, and similarly structured, pieces of text. They do it by predicting what word, sentence and paragraph would be good to come next based on probabilities calculated from all those examples.
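The idea of predicting the next word from probabilities can be shown with a tiny sketch. Real large language models use vastly more context and billions of examples, but the principle of counting what tends to come next, then choosing a likely continuation, is the same. The “papers” below are invented snippets for illustration:

```python
# A toy next-word predictor: count which word follows which in the
# training text (bigram counts), then predict the most common follower.

from collections import Counter, defaultdict

def train(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            counts[current][following] += 1
    return counts

def predict(counts, word):
    # Return the word most often seen after `word` in training.
    followers = counts[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

# Invented snippets standing in for millions of real papers.
papers = [
    "the results show a significant effect",
    "the results suggest a significant effect",
    "the results show a strong correlation",
]
model = train(papers)
```

Trained on these three snippets, the model predicts “show” after “results” and “effect” after “significant” – it has learned the sound of the genre, without understanding a word of it.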

Trusting future research

Most research can still be trusted, and the vast majority of scientists are working as hard as they can to advance human knowledge. Nonetheless, we all need to look carefully at research studies to ensure that they are legitimate, and we should be on extra alert as generative AI becomes even more powerful and widespread. We also need to think about how to improve universities and research culture generally, so that people don’t feel like they need to commit scientific fraud—something that usually happens because people are desperate to get or keep a job, or be seen as successful and reap the rewards. Somehow we need to change the game so that fraud no longer pays.

What do you think? Do you have ideas for how we can prevent fraud from happening in the first place, and how we can better detect it when it does occur? It is certainly an important new research topic. Find a solution and you could do massive good. If we don’t find solutions, we could lose the most successful tool humankind has ever invented for making all our lives better.



AMPER: AI helping future you remember past you

by Jo Brodie, Queen Mary University of London

Have you ever heard a grown up say “I’d completely forgotten about that!” and then share a story from some long-forgotten memory? While most of us can remember all sorts of things from our own life history it sometimes takes a particular cue for us to suddenly recall something that we’d not thought about for years or even decades. 

As we go through life we add more and more memories to our own personal library, but those memories aren’t neatly organised like books on a shelf. For example, can you remember what you were doing on Thursday 20th September 2018 (or can you think of a way that would help you find out)? You’re more likely to be able to remember what you were doing on the last Tuesday in December 2018 (but only because it was Christmas Day!). You might not spontaneously recall a particular toy from your childhood but if someone were to put it in your hands the memories about how you played with it might come flooding back.

Accessing old memories

In Alzheimer’s Disease (a type of dementia) people find it harder to form new memories or retain more recent information, which can make daily life difficult and bewildering, and they may lose their self-confidence. Their older memories, the ones that were made when they were younger, are often less affected, however. The memories are still there but might need drawing out with a prompt to help bring them to the surface.

old newspaper
Perhaps a newspaper advert will jog your memory in years to come… Image by G.C. from Pixabay

An EPSRC-funded project at Heriot-Watt University in Scotland is developing a tablet-based ‘story facilitator’ agent (a software program designed to adapt its response to human interaction) which contains artificial intelligence to help people with Alzheimer’s disease and their carers. The device, called ‘AMPER’*, could improve wellbeing and a sense of self in people with dementia by helping them to uncover their ‘autobiographical memories’, about their own life and experiences – and also help their carers remember them ‘before the disease’.

Our ‘reminiscence bump’

We form some of our most important memories between our teenage years and early adulthood – we start to develop our own interests in music and the subjects that we like studying, we might experience first loves, perhaps going to university, starting a career and maybe a family. We also all live through a particular period of time where we’re each experiencing the same world events as others of the same age, and those experiences are fitted into our ‘memory banks’ too. If someone was born in the 1950s then their ‘reminiscence bump’ will be events from the 1970s and 1980s – those memories are usually more available and therefore people affected by Alzheimer’s disease would be able to access them until more advanced stages of the disease process. Big important things that, when we’re older, we’ll remember more easily if prompted.
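Working out someone’s likely reminiscence bump is simple arithmetic. Here is a small sketch, assuming (as the article does) that the bump covers roughly the years when the person was 15 to 30 years old – the exact span varies from person to person:

```python
# Which years' events might make good memory prompts for someone
# born in a given year? Assumes the "reminiscence bump" covers
# roughly ages 15 to 30.

def bump_years(birth_year, start_age=15, end_age=30):
    return range(birth_year + start_age, birth_year + end_age + 1)
```

For someone born in 1955, this gives the years 1970 to 1985 – matching the article’s example of 1970s and 1980s events for people born in the 1950s.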

In years to come you might remember fun nights out with friends.
Image by ericbarns from Pixabay

Talking and reminiscing about past life events can help people with dementia by reinforcing their self-identity, and increasing their ability to communicate – at a time when they might otherwise feel rather lost and distressed. 

“AMPER will explore the potential for AI to help access an individual’s personal memories residing in the still viable regions of the brain by creating natural, relatable stories. These will be tailored to their unique life experiences, age, social context and changing needs to encourage reminiscing.”

Dr Mei Yii Lim, who came up with the idea for AMPER(3).

Saving your preferences

AMPER comes pre-loaded with publicly available information (such as photographs, news clippings or videos) about world events that would be familiar to an older person. It’s also given information about the person’s likes and interests. It offers examples of these as suggested discussion prompts and the person with Alzheimer’s disease can decide with their carer what they might want to explore and talk about. Here comes the clever bit – AMPER also contains an AI feature that lets it adapt to the person with dementia. If the person selects certain things to talk about instead of others then in future the AI can suggest more things that are related to their preferences over less preferred things. Each choice the person with dementia makes now reinforces what the AI will show them in future. That might include preferences for watching a video or looking at photos over reading something, and the AI can adjust to shorter attention spans if necessary. 
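The reinforcement idea can be sketched very simply. This is not the actual AMPER system, just a minimal illustration of the principle: every prompt the person chooses bumps a weight for its topic, and future suggestions are ranked by those weights. The topics and prompts are invented for the example:

```python
# A minimal sketch of preference reinforcement: each choice the
# person makes increases a topic's weight, and suggestions are
# offered from the most-preferred topics first.

from collections import Counter

class PromptSuggester:
    def __init__(self, prompts):
        self.prompts = prompts    # topic -> list of prompts
        self.weights = Counter()  # topic -> how often chosen

    def choose(self, topic):
        # The person picked a prompt about this topic: reinforce it.
        self.weights[topic] += 1

    def suggest(self, n=2):
        # Rank topics by how often they have been chosen so far.
        ranked = sorted(self.prompts, key=lambda t: -self.weights[t])
        return [self.prompts[t][0] for t in ranked[:n]]
```

If the person keeps choosing music prompts over news prompts, music rises to the top of the suggestions – each interaction gently steers what is offered next, just as described above.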

“Reminiscence therapy is a way of coordinated storytelling with people who have dementia, in which you exercise their early memories, which tend to be retained much longer than more recent ones, and produce an interesting interactive experience for them, often using supporting materials – so you might use photographs for instance.”

Prof Ruth Aylett, the AMPER project’s lead at Heriot-Watt University(4).

When we look at a photograph, for example, the memories it brings up haven’t been organised neatly in our brain like a database. Our memories form connections with all our other memories, more like the branches of a tree. We might remember the people that we’re with in the photo, then remember other fun events we had with them, perhaps places that we visited and the sights and smells we experienced there. AMPER’s AI can mimic the way our memories branch and show new information prompts based on the person’s previous interactions.
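That branching, association-style behaviour can be sketched as a simple network of linked prompts. The links below are invented for illustration – a real system would build them from the person’s own life information:

```python
# A toy sketch of associative prompt selection: each memory prompt
# links to related ones, and the next suggestions are the neighbours
# of whatever the person engaged with last.

LINKS = {
    "seaside holiday photo": ["ice cream vans", "old beach huts"],
    "ice cream vans": ["childhood sweets", "summer songs"],
    "old beach huts": ["family picnics"],
}

def next_prompts(chosen, seen):
    # Follow the association links, skipping prompts already shown.
    return [p for p in LINKS.get(chosen, []) if p not in seen]
```

Each choice opens up a new branch of related prompts, mimicking the way one memory leads naturally to another rather than working through a fixed list.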

Although AMPER can help someone with dementia rediscover themselves and their memories it can also help carers in care homes (who didn’t know them when they were younger) learn more about the person they’re caring for.

*AMPER stands for ‘Agent-based Memory Prosthesis to Encourage Reminiscing’.


Suggested classroom activities – find some prompts!

  • What’s the first big news story you and your class remember hearing about? Do you think you will remember that in 60 years’ time?
  • What sort of information about world or local events might you gather to help prompt the memories of someone born in 1942, 1959, 1973 or 1997? (Remember that their reminiscence bump covers roughly the years when they were 15 to 30 years old – some of them may still be in the process of making those memories for the first time!)

See also

If you live near Blackheath in South East London why not visit Age Exchange, a reminiscence centre and arts charity providing creative group activities for those living with dementia and their carers. It has a very nice cafe.

Related careers

The AMPER project is interdisciplinary, mixing robots and technology with psychology, healthcare and medical regulation.

We have information about four similar-ish job roles on our TechDevJobs blog that might be of interest: a group of job adverts for roles in the Netherlands related to the ‘Dramaturgy^ for Devices’ project, which links technology with the performing arts to adapt robots’ behaviour and improve their social interaction and communication skills.

Below is a list of four job adverts (which have now closed!) which include information about the job description, the types of people that the employers were looking for and the way in which they wanted them to apply. You can find our full list of jobs that involve computer science directly or indirectly here.

^Dramaturgy refers to the study of the theatre, plays and other artistic performances.

Dramaturgy for Devices – job descriptions

More on …

1. Agent-based Memory Prosthesis to Encourage Reminiscing (AMPER) Gateway to Research
2. The Digital Human: Reminiscence (13 November 2023) BBC Sounds – a radio programme that talks about the AMPER Project.
3. Storytelling AI set to improve wellbeing of people with dementia (14 March 2022) Heriot-Watt University news
4. AMPER project to improve life for people with dementia (14 January 2022) The Engineer



Virtual reality goggles for mice

by Paul Curzon, Queen Mary University of London

Conjure up a stereotypical image of a scientist and they will likely have a white coat. If not brandishing test tubes, you might imagine them working with mice scurrying around a maze. In future the scientists may well be doing a lot of programming, and the mice for their part will be scurrying around in their own virtual world wearing Virtual Reality goggles.

Scientists have long used mazes as a way to test the intelligence of mice, to the point that it has entered popular culture as a stereotypical thing that scientists in white lab coats do. Mazes do give ways to test the intelligence of animals, including exploring their memory and decision-making ability in controlled experiments. That can ultimately help us better understand how our own brains work, and give us a better understanding of intelligence. The more we understand animal cognition as well as human cognition, the more computer scientists can use that improved understanding to create more intelligent machines. It can also help neurobiologists find ways to improve our intelligence.

Flowers for Algernon is a brilliant short story, and later novel, based on this idea: it uses experiments on mice and humans to test surgery intended to improve intelligence. In a slightly different take on mice-maze experiments, Douglas Adams, in ‘The Hitchhiker’s Guide to the Galaxy’, famously claimed that the mice were actually pan-dimensional beings and these experiments were really incredibly subtle experiments the mice were performing on humans. Whatever the truth of who is experimenting on whom, the experiments just took a great leap forward because scientists at Northwestern University have created Virtual Reality goggles for their mice.

For a long time researchers at Northwestern have used a virtual reality version of maze experiments, with mice running on treadmills surrounded by screens projecting whatever the researchers want them to see, whether mazes, predators or prey. This has the advantage of being much easier to control than using physical mazes, and as the mice are actually stationary the whole time, just running on a treadmill, brain-scanning technology can be used to see what is actually happening in their brains while they face these virtual trials. The problem, though, is that the mice, with their 180 degree vision, can still see beyond the edges of the screens. The screens also give no sense of three dimensions, when, like us, mice naturally see in 3D. As the screens are not fully immersive, they are not fully natural, and that could affect the behaviour of the mice and so invalidate the experimental results.

That is why the Northwestern researchers invented the mousey VR goggles, the idea being that they would give a way to totally immerse the mice in their online world, and so improve the reliability of the experiments. In the current version the goggles are not actually worn by the mice, as they are still too heavy. Instead, the mouse’s head is held in place really close to them, with the same effect of total immersion. Future versions may be small enough for the mice to wear, though.

The scientists have already found that the mice react more quickly to events, like the sight of a predator, than in the old set-up, suggesting that being able to see they were in a lab was affecting their behaviour. Better still, there are new kinds of experiment that can be done with this set-up. In particular, the researchers have run experiments where an aerial predator like an owl appears from above the mice in a natural way. Mounting screens above them previously wasn’t possible as it got in the way of the brain-scanning equipment. What does happen when a virtual owl appears? The mice either run faster or freeze, just as in the wild. This means that by scanning their brains while this is happening, researchers can investigate how the mice perceive the threat, and how decision-making takes place at the level of brain activity. The scientists also intend to run similar experiments where the mouse is the predator, for example chasing a virtual fly. Again, this would not have been possible previously.

That in any case is what we think the purpose of these new experiments is. What new and infinitely subtle experiments it is allowing the pan-dimensional mice to perform on us remains to be seen.
