Understanding matters of the heart – creating accurate computer models of human organs

Colourful depiction of a human heart

by Paul Curzon, Queen Mary University of London

Ada Lovelace, the ‘first programmer’, thought the possibilities of computer science might cover a far wider breadth than anyone else of her time imagined. For example, she mused that one day we might be able to create mathematical models of the human nervous system, essentially describing how electrical signals move around the body. The University of Oxford’s Blanca Rodriguez is interested in matters of the heart. She’s a bioengineer creating accurate computer models of human organs.

How do you model a heart? Well, you first have to create a 3D model of its structure. You start with MRI scans. They give you a series of pictures of slices through the heart. To turn that into a 3D model takes some serious computer science: image processing that works out, from the pictures, what is tissue and what isn’t. Next you do something called mesh generation. That involves breaking up the model into smaller parts. What you get is not just a picture of the surface of the organ but an accurate model of its internal structure.
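
Here is a toy sketch of that first segmentation step, just to make the idea concrete. It is not the pipeline Blanca’s team actually use (real segmentation and mesh generation are far more sophisticated): the brightness threshold and image sizes are made-up assumptions.

```python
# A toy sketch of turning MRI slices into a 3D tissue model.
# Real pipelines use much cleverer segmentation than a brightness
# threshold, and then split the tissue into mesh elements.
import numpy as np

def slices_to_volume(slices, tissue_threshold=0.5):
    """Stack 2D slice images into a 3D array and mark which
    voxels look like tissue (True) and which don't (False)."""
    volume = np.stack(slices, axis=0)    # shape: (depth, height, width)
    return volume > tissue_threshold     # crude segmentation by brightness

# Pretend we have 10 slices of a 64x64 scan, with values in [0, 1].
fake_slices = [np.random.rand(64, 64) for _ in range(10)]
tissue = slices_to_volume(fake_slices)
print(tissue.shape, int(tissue.sum()), "tissue voxels")
```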

So far so good, but it’s still just the structure. The heart is a working, beating thing, not just a sculpture. To understand it you need to see how it works. Blanca and her team are interested in simulating the electrical activity in the heart – how electrical pulses move through it. To do this they create models of the way individual cells propagate an electrical signal. Once you have this you can combine it with the model of the heart’s structure to give a model of how the whole organ works. You essentially have a lot of equations. Solving the equations gives a simulation of how electrical signals propagate from cell to cell.
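
To see the flavour of this, here is a minimal sketch of an electrical pulse spreading along a 1D chain of cells. It uses the classic FitzHugh–Nagumo equations – a deliberately simplified textbook cell model, not the detailed heart-cell models the Oxford team solve – and all the constants are illustrative.

```python
# A minimal sketch: a voltage pulse travelling along a chain of
# excitable cells, each feeling its neighbours through diffusion.
import numpy as np

N, steps, dt = 100, 5000, 0.05
D = 0.5                          # coupling between neighbouring cells
v = np.zeros(N)                  # voltage-like variable per cell
w = np.zeros(N)                  # slower recovery variable per cell
v[:5] = 1.0                      # stimulate the cells at one end
peak = np.zeros(N)               # highest voltage each cell reaches

for _ in range(steps):
    lap = np.roll(v, 1) - 2 * v + np.roll(v, -1)   # neighbour coupling
    lap[0] = v[1] - v[0]                           # no-flux boundaries
    lap[-1] = v[-2] - v[-1]
    v = v + dt * (v - v**3 / 3 - w + D * lap)      # FitzHugh-Nagumo
    w = w + dt * 0.08 * (v + 0.7 - 0.8 * w)
    peak = np.maximum(peak, v)

# If the pulse propagated, the cells at the far end fired too.
print("peak voltage at far end:", round(float(peak[-1]), 2))
```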

The models Blanca’s team have created are based on a healthy rabbit heart. Now they have it, they can simulate it working and see if the results correspond to those from lab experiments. If they do, that suggests their understanding of how cells work together is correct. When the results don’t match, that is still good, as it gives new questions to research. It would mean something about their initial understanding was wrong, so would drive new work to fix the problem, and so improve the models.

Once the models have been validated in this way – shown to be an accurate description of the way a rabbit’s heart works – they can be used to explore things you just can’t do with experiments: exploring what happens when changes are made to the structure of the virtual heart, or how drugs change the way it works, for example. That can lead to new drugs.

They can also use it to explore how the human heart works. For example, early work has looked at the heart’s response to an electric shock. Essentially the heart reboots! That’s why when someone’s heart stops in hospital, the emergency team give it a big electric shock to get it going again. The model predicts in detail what actually happens to the heart when that is done. One of the surprising things is that it suggests how well an electric shock works depends on the particular structure of the person’s heart! That might mean treatment could be more effective if tailored to the person.

Computer modelling is changing the way science is done. It doesn’t replace experiments. Instead clinical work, modelling and experiments combine to give us a much deeper understanding of the way the world – and that includes our own hearts – works.


This article was originally published on the CS4FN website and a copy can be found on p16 of issue 20 of the CS4FN magazine, a free PDF copy of which can be downloaded by clicking the picture or link below, along with all of our free-to-download booklets and magazines.


Logo for CRY: Cardiac Risk in the Young

The charity Cardiac Risk in the Young raises awareness of cardiac electrical rhythm abnormalities and supports testing (electrocardiograms and echocardiograms) for all young people aged 14-35.


This blog is funded through EPSRC grant EP/W033615/1.

A storm in a bell jar

by Paul Curzon, Queen Mary University of London

(From the archive)

lightning
Image by FelixMittermeier from Pixabay 

Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…

Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement, including work improving giant versions of the first batteries, called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to produce giant discharges of electricity with bangs as loud as cannons.

He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals. He was passing a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared, apparently from nowhere, and soon after started to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper. It was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy in trying to take the position of God.

(Does this start to sound like a modern social networking storm, trolls and all?) In fact he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs that just naturally hatched. Scientific results are only accepted if they can be replicated. Others, who took care to avoid contamination, couldn’t get the same result. The secret of creating life had not been found.

While Mary Shelley, who wrote Frankenstein, did know Crosse, he can’t have been the inspiration for Frankenstein, as has been suggested: sadly for the story’s sake, she wrote it decades earlier!




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1.

Pass the screwdriver, Igor

Mary Shelley, Frankenstein’s monster and artificial life

by Paul Curzon and Peter W McOwan, Queen Mary University of London

(Updated from the archive)

Frankenstein's Monster
Image by sethJreid from Pixabay

Shortly after Ada Lovelace was born, and so long before she made her predictions about future “creative machines”, Mary Shelley, a friend of her father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what was actually to come: what computer scientists might one day create – artificial life.

Life it may not be, but engineers are now doing pretty well in creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot. Maybe it’s time to start thinking about the consequences in case they gain a sense of self.

Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life might be created – androids that believe they are human – and the consequences for the creatures concerned.

Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.

For 70 years or more scientists have tried to create artificial intelligences. They have had a great deal of success in specific areas such as computer vision and chess playing programs. These are not really intelligent in the way humans are, though they are edging closer. However, none of these programs really cuts it as creating “life”. Life is something more than intelligence.

A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way, but that the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.

A Sodarace in action

The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that move around a virtual world, and even let them evolve. One approach to building creatures, such as a spider, would be to try and work out mathematical equations for how each leg moves and program those equations. The alternative, artificial life way, as used in Sodarace, is instead to program up the laws of physics, such as gravity and friction, and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out in advance every eventuality (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead natural behaviour emerges, as the sketch below illustrates.
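
Here is a minimal sketch of that mass-and-spring idea (not Sodarace’s actual code; all the constants are made up). Notice that nothing in it says how a creature should move: the behaviour just falls out of the physics.

```python
# Masses and springs under gravity and friction: Sodarace-style
# emergent movement in miniature.
import math

GRAVITY, FRICTION, DT = -9.8, 0.99, 0.01

class Mass:
    def __init__(self, x, y):
        self.x, self.y, self.vx, self.vy = x, y, 0.0, 0.0

class Spring:
    def __init__(self, a, b, stiffness=50.0):
        self.a, self.b, self.k = a, b, stiffness
        self.rest = math.dist((a.x, a.y), (b.x, b.y))  # natural length

def step(masses, springs):
    for s in springs:
        dx, dy = s.b.x - s.a.x, s.b.y - s.a.y
        length = math.hypot(dx, dy) or 1e-9
        f = s.k * (length - s.rest)              # Hooke's law
        fx, fy = f * dx / length, f * dy / length
        s.a.vx += fx * DT; s.a.vy += fy * DT
        s.b.vx -= fx * DT; s.b.vy -= fy * DT
    for m in masses:
        m.vy += GRAVITY * DT                     # gravity pulls down
        m.vx *= FRICTION; m.vy *= FRICTION       # friction slows things
        m.x += m.vx * DT; m.y += m.vy * DT
        if m.y < 0:                              # crude ground collision
            m.y, m.vy = 0.0, 0.0

# Two masses joined by a spring, dropped onto the ground.
a, b = Mass(0.0, 1.0), Mass(1.0, 2.0)
masses, springs = [a, b], [Spring(a, b)]
for _ in range(1000):
    step(masses, springs)
print(round(a.y, 2), round(b.y, 2))  # both come to rest on the ground
```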

The artificial life community believe that not just life-like movement but life-like intelligence can emerge in a similar way. Rather than programming the behaviour of muscles you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs that are storming the world of Artificial Intelligence, turning it into an everyday tool. However, if aiming for artificial life, you would keep going and combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.
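
The building block itself is tiny. Below is the textbook artificial neurone (a sketch, not any particular research system’s version): it fires if the weighted sum of its inputs crosses a threshold, and everything else comes from wiring lots of these together.

```python
# A single artificial neurone: the basic building block that
# artificial brains are assembled from.
def neurone(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these made-up weights this neurone behaves like an AND gate:
print(neurone([1, 1], weights=[0.6, 0.6], threshold=1.0))  # -> 1
print(neurone([1, 0], weights=[0.6, 0.6], threshold=1.0))  # -> 0
```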

Want to know more? A wonderful early book is Steve Grand’s “Creation”, on how he created what at the time was claimed to be “the nearest thing to artificial life yet”… It started life as the game “Creatures”.

Then have a go at creating artificial life yourself (but be nice to it).




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1.

Ada Lovelace in her own words

by Ursula Martin, University of Oxford

(From the archive)

A jumble of letters

Charles Babbage invented wonderful computing machines. But he was not very good at explaining things. That’s where Ada Lovelace came in. She is famous for writing a paper in 1843 explaining how Charles Babbage’s Analytical Engine worked – including a big table of formulas which is often described as “the first computer program”.

Charles Babbage invented his mechanical computers to save everyone from the hard work of doing big mathematical calculations by hand. He only managed to build a few tiny working models of his first machine, his difference engine. It was finally built to Babbage’s designs in the 1990s and you can see it in the London Science Museum. It has 8,000 mechanical parts, and is the size of a small car, but when the operator turns the big handle on the side it works perfectly, and prints out correct answers.

Babbage invented, but never built, a more ambitious machine, his Analytical Engine. In modern language, this was a general purpose computer, so it could have calculated anything a modern computer can – just a lot more slowly. It was entirely mechanical, but it had all the elements we recognize today – like memory, CPU, and loops.

Lovelace’s paper explains all the geeky details of how numbers are moved from memory to the CPU and back, and the way the machine would be programmed using punched cards.

But she doesn’t stop there – in quaint Victorian language she tells us about the challenges familiar to every programmer today! She understands how complicated programming is:

“There are frequently several distinct sets of effects going on simultaneously; all in a manner independent of each other, and yet to a greater or less degree exercising a mutual influence.”

the difficulty of getting things right:

“To adjust each to every other, and indeed even to perceive and trace them out with perfect correctness and success, entails difficulties whose nature partakes to a certain extent of those involved in every question where conditions are very numerous and inter-complicated.”

and the challenge of making things go faster:

“One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.”

She explains how computing is about patterns:

“it weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”.

and inventing new ideas:

“We might even invent laws … in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining”.

and being creative. If we knew the laws for composing music:

“the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

Alan Turing famously asked if a machine can think – Ada Lovelace got there first:

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

Wow, pretty amazing, for someone born 200 years ago.




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1.

Dickens knitting in code

by Paul Curzon, Queen Mary University of London

(From the archive)

Ball of wool pink and yellow close up
Image by Silvia Stödter from Pixabay 

Charles Dickens is famous for his novels highlighting Victorian social injustice. Despite what people say, art and science really do mix, and Dickens certainly knew some computer science. In his classic novel about the French Revolution, A Tale of Two Cities, one of his characters relies on some computer science based knitting.

Dickens actually moved in the same social circles as Charles Babbage, the Victorian inventor of the first computer (which he designed but unfortunately never managed to build), and Ada Lovelace, the mathematician who worked with him on those first computers. They went to the same dinner parties and Dickens will have seen Babbage demonstrate his prototype machines. An engineer in Dickens’s novel Little Dorrit is even believed to be partly based on Babbage. Dickens was probably the last non-family member to visit Ada before she died. She asked him to read to her, choosing a passage from his book Dombey and Son in which the son, Paul Dombey, dies. Like Ada, Paul Dombey had suffered from illness all his life.

So Charles Dickens had lots of opportunity to learn about algorithms! His novel ‘A Tale of Two Cities’ is all about the French Revolution, but lurking in the shadows is some computer science. One of the characters, a revolutionary called Madame Defarge, takes on the responsibility of keeping a register of all those people who are to be executed once the revolution comes to pass: the aristocrats and “enemies of the people”. Of course, in the actual French Revolution lots of aristocrats were guillotined precisely for being enemies of the new state.

Now Madame Defarge could have just tried to memorise the names on her ‘register’, as she supposedly has a great memory, but the revolutionaries wanted a physical record. That raises the problem, though, of how to keep it secret, and that is where the computer science comes in. Madame Defarge knits all the time, and so she decides to store the names in her knitting.

“Knitted, in her own stitches and her own symbols, it will always be as plain to her as the sun. Confide in Madame Defarge. It would be easier for the weakest poltroon that lives, to erase himself from existence, than to erase one letter of his name or crimes from the knitted register of Madame Defarge.”

Computer scientists call this steganography: hiding information or messages in plain sight, so that no one suspects they are there at all. Modern forms of steganography include hiding messages in the digital representation of pictures and in the silences of a Skype conversation.

Madame Defarge didn’t of course just knit French words in the pattern like a Victorian scarf version of a T-shirt message. It wouldn’t have been very secret if anyone looking at the resulting scarf could read the names. So how to do it? In fact, knitting has been used as a form of steganography for real. One way was for a person to take a ball of wool and mark messages down it in Morse code dots and dashes. The wool was then knitted into a jumper or scarf. The message is hidden! To read it you unpick it all and read the Morse code back off the wool.
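
Here is a sketch of that wool trick in code: working out the dots and dashes you would mark down the ball of wool for a name (the name below is just an illustrative pick from the novel).

```python
# Morse code: the marks to ink down the wool before knitting it up.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def marks_for(name):
    """The sequence of dots and dashes to mark along the wool."""
    return ' / '.join(MORSE[c] for c in name.upper() if c in MORSE)

print(marks_for("Evremonde"))  # . / ...- / .-. / . / -- / --- / -. / -.. / .
```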


That wouldn’t have worked for Madame Defarge though. She wanted to add the names to the register in plain view of a person as they watched, and without them knowing what she was doing. She therefore needed the knitting patterns themselves to hold the code. It was possible because she was both a fast knitter and sat knitting constantly, so it raised no suspicion. The names were therefore, as Dickens writes, “Knitted, in her own stitches and her own symbols”.

She used a ‘cipher’ and that brings in another area of computer science: encryption. A cipher is just an algorithm – a set of rules to follow – that converts symbols in one alphabet (letters) into different symbols. In Madame Defarge’s case the new symbols were not written but knitted sequences of stitches. Only if you know the algorithm, and a secret ‘key’ that was used in the encryption, can you convert the knitted sequences back into the original message.

In fact both steganography and encryption date back thousands of years (computer science predates computers!), though Charles Dickens may have been the first to use knitting to do it in a novel. The Ancient Greeks used steganography. In the most famous case a message was written on a slave’s shaved head. They then let the hair grow back. The Romans knew about cryptographic algorithms too, and one of the most famous ciphers is called the Caesar cipher as Julius Caesar used it when writing letters… even in Roman times people were worried about spies reading their equivalent of emails.
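
The Caesar cipher fits in a few lines of Python. The key is just how far round the alphabet each letter is shifted (Caesar is said to have used a shift of three):

```python
# The Caesar cipher: shift each letter 'key' places round the alphabet.
def caesar(message, key):
    result = []
    for ch in message.upper():
        if ch.isalpha():
            shifted = (ord(ch) - ord('A') + key) % 26
            result.append(chr(shifted + ord('A')))
        else:
            result.append(ch)           # leave spaces etc. alone
    return ''.join(result)

secret = caesar("ATTACK AT DAWN", key=3)
print(secret)                   # DWWDFN DW GDZQ
print(caesar(secret, key=-3))   # shifting back again decrypts it
```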

Dickens didn’t actually describe the code that Madame Defarge was using, so we can only guess… but why not see that as an opportunity and (if you can knit) invent a way yourself? If you can’t knit then learn to knit first and then invent one! Somehow you need a series of stitches to represent each letter of the alphabet. In doing so you are doing algorithmic thinking with knitting. You are knitting your way to being a computer scientist.




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1.

Letters from the Victorian Smog: Braille: binary, bits & bytes

by Paul Curzon, Queen Mary University of London

Reading Braille image by Myriams-Fotos from Pixabay

We take for granted that computers use binary: to represent numbers, letters, or more complicated things like music and pictures… any kind of information. That was something Ada Lovelace realised very early on. Binary wasn’t invented for computers though. Its first modern use as a way to represent letters was actually invented in the first half of the 19th century. It is still used today: Braille.

Braille is named after its inventor, Louis Braille. He was born six years before Ada, though they probably never met, as he lived in France. He was blinded as a child in an accident, and invented the first version of Braille in 1824, when he was only 15, as a way for blind people to read. What he came up with was a representation for letters that a blind person could read by touch.

Choosing a representation for the job is one of the most important parts of computational thinking. It really just means deciding how information is going to be recorded. Binary gives ways of representing any kind of information that are easy for computers to process. The idea is just that you create codes to represent things made up of only two different characters: 1 and 0. For example, you might decide that the binary for the letter ‘p’ was 01110000. For the letter ‘c’, on the other hand, you might use the code 01100011. The capital letters ‘P’ and ‘C’ would have different codes again. This is a good representation for computers to use as the 1s and 0s can themselves be represented by high and low voltages in electrical circuits, or switches being on or off.
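
Those example codes are in fact the ones modern computers agreed on (ASCII, carried over into Unicode). You can print them for yourself:

```python
# Print the 8-bit binary code each letter gets in ASCII/Unicode.
for letter in "pcPC":
    print(letter, format(ord(letter), '08b'))
# p 01110000
# c 01100011
# P 01010000
# C 01000011
```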

The first representation Louis Braille chose wasn’t great though. It had dots, dashes and blanks – a three-symbol code rather than the two of binary. It was hard to tell the difference between the dots and dashes by touch, so in 1837 he changed the representation – switching to a code of dots and blanks.

He had invented the first modern form of writing based on binary.

Braille works in the same way as modern binary representations for letters. It uses collections of raised dots (1s) and no dots (0s) to represent them. Each gives a bit of information in computer science terms. To make the bits easier to touch they’re grouped into pairs. To represent all the letters of the alphabet (and more) you just need 3 pairs as that gives 64 distinct patterns. Modern Braille actually has an extra row of dots giving 256 dot/no dot combinations in the 8 positions so that many other special characters can be represented. Representing characters using 8 bits in this way is exactly the equivalent of the computer byte.

Modern computers use a standardised code, called Unicode. It gives an agreed code for referring to the characters in pretty well every language ever invented (even, unofficially, Klingon!). There is also a Unicode representation for Braille, using a different code to Braille itself. It is used to allow letters to be displayed as Braille on computers! Because all computers using Unicode agree on the representations of all the different alphabets, characters and symbols they use, they can more easily work together. Agreeing the code means that it is easy to move data from one program to another.
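
You can play with Unicode’s Braille block directly: it starts at code point U+2800, and adding the bit pattern for which of the 8 dots are raised gives the right symbol. This sketch uses the standard Braille patterns for the first few letters (‘a’ is dot 1, ‘b’ is dots 1 and 2, ‘c’ is dots 1 and 4):

```python
# Unicode Braille: one bit per dot position, added to the base
# code point U+2800.
def braille_char(dots):
    """dots: which of the 8 dot positions (1-8) are raised."""
    mask = sum(1 << (d - 1) for d in dots)
    return chr(0x2800 + mask)

print(braille_char([1]))      # ⠁  'a'
print(braille_char([1, 2]))   # ⠃  'b'
print(braille_char([1, 4]))   # ⠉  'c'
print(2 ** 6, 2 ** 8)         # 64 six-dot patterns; 256 with all 8 dots
```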

The 1830s were an exciting time to be a computer scientist! This was around the time Charles Babbage met Ada Lovelace and they started to work together on the analytical engine. The ideas that formed the foundation of computer science must have been in the air, or at least in the Victorian smog.

**********************************

Further reading

This post was first published on CS4FN and also appears on page 7 of Issue 20 of the CS4FN magazine. You can download a free PDF copy of the magazine here as well as all of our previous magazines and booklets, at our free downloads site.

The RNIB has guidance for sighted people who might be producing Braille texts for blind people, about how to use Braille on a computer and get it ready for correct printing.

This History of Braille article also references an earlier ‘Night Writing’ system developed by Charles Barbier to allow French soldiers in the 1800s to read military messages without using a lamp (which gave away their position, putting them at risk). Barbier’s system inspired Braille to create his.

A different way of representing letters is Morse Code which is a series of audible short and long sounds that was used to communicate messages very rapidly via telegraphy.

Find out about Abraham Louis Breguet’s ‘Tactful Watch’ that let people work out what time it was by feel, instead of rudely looking at their watch while in company.

Ada Lovelace: Visionary

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace

By Paul Curzon, Queen Mary University of London

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – how to make a machine that could do calculations so we wouldn’t need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to get the money and build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That’s where Ada came in. They worked together to try and realise their joint dream, working out together how to program.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with a creative flair, and while Charles had come up with the innovative idea of the Analytical Engine itself, he didn’t see beyond his original idea of the computer as a calculator. She saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television – all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.
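
Here is that insight in miniature, with a made-up code in which numbers stand for musical notes:

```python
# Agree a code and numbers can stand for anything: here, notes.
NOTE_CODE = {1: 'C', 2: 'D', 3: 'E', 4: 'F', 5: 'G', 6: 'A', 7: 'B'}

tune = [3, 2, 1, 2, 3, 3, 3]   # a tune, stored purely as numbers
print(' '.join(NOTE_CODE[n] for n in tune))   # E D C D E E E
```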

Ada saw even further though. She combined maths with that creative flair, and so realised that not only could computers store and play music, they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn’t just the first programmer, she was the first truly creative programmer.

This article was originally published at the CS4FN website, along with lots of other articles about Ada Lovelace. We also have a special Ada Lovelace-themed issue of the CS4FN magazine which you can download as a PDF (click picture below).

See also: The very first computers and Ada Lovelace Day (2nd Tuesday of October). Help yourself to our Women in Computing posters PDF, or sign up to get FREE copies posted to your school (UK-based only, please).