Want to make sure your life turns out the way you want? Want to trade this life for fortune and fame? If you believe post-grunge rock band Nickelback’s 2005 hit single, then ‘you wanna be a Rockstar’! Love or hate the song, are they right, or do you really wanna be a tech entrepreneur?
Some people want a hedonistic life. Some want to be famous. Others just want to be stinking rich. Some want all three. Some want to really make a positive difference to people’s lives.
So are Nickelback right? What is the best way to get all three and maybe even the fourth too – and quickly – say before the age of 35? In fact, let’s not set our sights too low. Let’s aim to be one of the richest people in the world. Let’s think multi-billionaire. Let’s assume too that we have to do it without relying on accidents of birth – no inheritance of billions from Mummy and Daddy’s money to look forward to. Winning the lottery wouldn’t even get you close so, while luck matters, don’t rely on your luck alone either. How are you actually gonna do it?
From the queues of people wanting to be on reality TV programmes, whether The X Factor, The Voice or Love Island, most people seem to agree with Nickelback’s solution: the way to early riches is to become famous, whether a rock star, a footballer, a film star, or these days just famous for being famous. It’s people like that that fill the super-rich but young and self-made lists, isn’t it? Well, isn’t it?
Nice idea, but no.
Some of those people do make a lot of money in a short time. They have to, though, as for most of them their careers are likely to be very short. They don’t stay famous or in the rich lists for long and are unlikely to make it to the super-rich.
They are all Techno Stars.
There is one very obvious pattern to Forbes’ self-made super-rich list of the top billionaires on the planet. Almost a quarter of the top super-rich, at around the time that Nickelback wrote their hit song, made their money in a similar way. They aren’t film stars, rock stars or sports stars. They are all techno stars. They are also all self-made billionaires. That contrasts with the other people in the same league. With one exception, the rest are all there because of family wealth or are old: they took their time getting to extreme wealth. Contrast that with the Google guys, say, who made the top 30 by their 30s.
Number one – the richest person on Earth with 56 billion dollars – in 2007 was, not surprisingly, Bill Gates, who with Paul Allen (number 19) set up Microsoft. Paul Allen went on to become a major investor in DreamWorks, a company working on the boundaries of film-making and computer science. They went on to use much of their personal wealth (and time) solving humanitarian problems, focussing on things like health and education. Yes, many rock stars do occasional charity gigs (think Live Aid), so if saving the Earth is your aim then becoming a rock star may be one way to give you some clout to make a difference. It’s nothing compared to what someone as rich as the Microsoft pair have personally achieved, though.
Not far behind was Lawrence Ellison, worth 21.5 billion dollars at the time. He made his name by creating the company Oracle that was largely responsible for pushing the database revolution – not just using databases of course but creating the software that allows other people to use databases. As he’s said “Money is just a method of keeping score now.”
There are then the Google pair, Sergey Brin and Larry Page, sharing position 26. They only had 16 billion dollars each but, hey, they only founded Google in 1998. They planned to “do no evil” with their riches and also wanted to plough money into charity. What else do you do when you have that kind of silly money?
At positions 30 and 31 in the 2007 rich list came Michael Dell and Steven Ballmer. Ballmer is ‘just’ another Microsoft man. Dell of course is responsible for Dell computers. He had the ear of a President as he was on the United States President’s Council of Advisors on Science and Technology. Want to make a difference? He could.
Have things changed? Well, yes. Forbes now use tech themselves to keep a real-time rich list. Of the top ten richest people in the world as I write this, 8 are tech entrepreneurs, now worth hundreds of billions each: Elon Musk (Tesla, X, etc), Jeff Bezos (Amazon), Larry Ellison (Oracle), Mark Zuckerberg (Facebook), Larry Page (Google), Sergey Brin (Google), Jensen Huang (Nvidia) and Steve Ballmer (Microsoft – now richer than Bill Gates, though he is still filthy rich too).
In short, programming/computer science/electronic engineering and inheritance are the most likely sources of riches for the richest people in the world. Programming is the only way to reach the top without inheriting money (or perhaps being a Russian president’s protege).
The other advantage of the technology route to riches over the rock star way, of course, is that you can aim higher still. Don’t wind up dead at 40 from a drug-induced rock star lifestyle – why not aim to still be enjoying being filthy rich at 100 too? If you are wise you may make the world a far better place, though you may also gain the power to make it far worse.
When disasters involving technology occur, human error is often given as the reason, but even experts make mistakes using poor technology. Rather than blame the person, human error should be seen as a design failure. Bad design can make mistakes more likely and good design can often eliminate them. Optical illusions and magic tricks show how we can design things that cause everyone to make the same systematic mistake, and we need to use the same understanding of the brain when designing software and hardware. This is especially important if the gadgets are medical devices where mistakes can have terrible consequences. The best computer scientists and programmers don’t just understand technology, they understand people too, and especially our brain’s fallibilities. If they don’t, then mistakes using their software and gadgets are more likely. If people make mistakes, don’t blame the person, fix the design and save lives.
Illusions
Optical illusions and magic tricks hold a mirror up to the limits of our brains. Even when you know an optical illusion is an illusion you cannot stop seeing the effect. For example, this image of an eye is completely flat and stationary: nothing is moving. And yet if you move your head very slightly from side to side the centre pops out and seems to move separately from the rest of the eye.
Illusions occur because our brains have limited resources and take short cuts in processing the vast amount of information that our senses deliver. These short cuts allow us to understand what we see faster and with fewer resources. Illusions happen when the short cuts are applied in situations where they do not work.
What this means is that we do not see the world as it really is but see a simplified version constructed by our subconscious brain and provided to our conscious brain. It is very much like the film The Matrix, except it is our own brains providing the fake version of the world we experience rather than alien computers.
Attention
The way we focus our attention is one example of this. You may think that you see the world as it is, but you only directly see the things you focus on; your brain fills in the rest rather than constantly feeding you the actual information. It does this based on what it last saw there, but also just by completing patterns. The following illusion shows this in action. There are 12 black dots and as you move your attention from one to the next you can see and count them all. However, you cannot see them all at once. The ones in your peripheral vision disappear as you look away, as the powerful pattern of grey lines takes over. You are not seeing everything that is there to be seen!
Our brains also have very limited working memory and limited attention. Magicians exploit this to design “magical systems” where a whole audience makes the same mistake at the same time. Design the magic well, so that these limitations are triggered, and people miss things that are there to be seen, forget things they knew a few moments before, and so on. For example, by distracting the audience’s attention magicians make them miss something that was there to be seen.
What does this mean to computer scientists?
When we design the way we interact with a computer system, whether software or hardware, it is also possible to trigger the same limitations a magician or optical illusion does. A good interaction designer therefore does the opposite of a magician and, for example: draws a user’s attention to things that must not be missed at a critical time; ensures they do not forget things that are important; helps them keep track of the state of the system; and gives good feedback so they know what has happened.
Much software is poorly designed, leading to people making mistakes: not all the time, but some of the time. The best designs help people avoid making mistakes and also help them spot and fix mistakes as soon as they do make them.
Examples of poor medical device design
The following are examples of the interfaces of actual medical devices found in a day of exploration by one researcher (Paolo Masci) at a single very good hospital (in the US).
When the nurse or doctor types the key sequence 100.1 as a drug dose rate, one infusion pump, without any explicit warning other than the number being displayed, registered the number entered as 1001.
Probably, the programmer had been told that when doses are as large as 100, fractional doses are so relatively small that they make no difference. A user typing in such fractional amounts is likely making an error, as such a dose is unlikely to be prescribed. The typed decimal point is therefore just ignored as a mistake by the infusion pump. Separately (perhaps coded by a different programmer in the team, or at a different time), until the ENTER key is pressed the code treats the number as incomplete. Any further digits typed are therefore just accepted as part of the number.
A different design by a different manufacturer also treats the key sequence as 1001 (though in the case shown 1001 is rejected as it exceeds the maximum allowable rate, caused by the same issue of the device silently ignoring a decimal point).
This suggests two different coding teams independently coded in the same design flaw, leading to the same user error.
What is wrong with that?
Devices should never silently ignore or correct input if bad mistakes are to be avoided. Here, that original design flaw could lead to a dose 10 times too big being infused into a patient, and that could kill. It relies on the person typing the number noticing that the decimal point has been ignored (with no help from the device). Decimal points are small and easily missed, of course. Also, the user’s attention cannot be guaranteed to be on the machine and, in fact, with a digit keypad for entering numbers that attention is likely to be on the keys. Alarms or other distractions elsewhere could easily mean they do not notice the missing decimal point (which is a tiny thing to see).
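To see how such a flaw can arise, here is a minimal sketch in Python of number-entry logic with the flaw built in. This is a hypothetical reconstruction for illustration, not the actual pump code:

```python
# Hypothetical reconstruction of the flawed dose-entry logic:
# once the integer part reaches 100, a decimal point is silently
# dropped, so the key sequence 100.1 is registered as 1001.
def flawed_dose_entry(keys):
    """Process a sequence of keypad presses into a dose number."""
    digits = ""
    for key in keys:
        if key == ".":
            if "." in digits:
                continue  # ignore a repeated decimal point
            if digits and int(digits) >= 100:
                # Design flaw: for doses of 100 or more the decimal
                # point is assumed to be a typo and silently dropped.
                continue  # no warning is given!
            digits += key
        elif key.isdigit():
            digits += key
    return float(digits)

print(flawed_dose_entry(["1", "0", "0", ".", "1"]))  # 1001.0: a 10x overdose!
print(flawed_dose_entry(["9", "9", ".", "5"]))       # 99.5 as expected
```

The point is that each individual decision (ignore unlikely fractions; keep accepting digits until ENTER) seems sensible on its own; combined, they silently turn 100.1 into 1001.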
An everyday example of the same kind of problem, showing how easily mistakes are missed, is the auto-completion and auto-correction of spelling mistakes in texts and word processors. Goofs where an auto-corrected word is missed are very common. Anything that common needs to be designed away in a safety-critical system.
Design Rules
One way such problems can be avoided is by programmers following interaction design rules. The machine (and the programmer writing the code) does not know what a user is trying to input when they make a mistake. One design rule is therefore that a program should NEVER correct any user error silently. Here, perhaps the mistake was pressing 0 twice rather than pressing the decimal point. In the case of user errors, the program should raise awareness of the error and not allow further input until the error is corrected. The program should explicitly draw the person’s attention to the problem (eg by changing colour, flashing, beeping, etc). This involves using the same understanding of cognitive psychology as a magician, to control their attention. Whereas a magician takes attention away from the thing that matters, the programmer draws it to the problem.
It should make clear in an easily understandable error message what the problem is (eg here “Doses over 99 should not include decimal fractions. Please delete the decimal point.”) It should then leave the user to make the correction (eg deleting the decimal point) not do it itself.
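As a sketch of this design rule in code (again hypothetical, using the same keypad-sequence representation), the program refuses the input and explains the problem instead of silently correcting it:

```python
def safe_dose_entry(keys):
    """Follow the design rule: never silently correct a user error.
    Raise the problem explicitly; accepting no further input until
    the user fixes it is modelled here by raising an exception."""
    digits = ""
    for key in keys:
        if key == ".":
            if digits and "." not in digits and int(digits) >= 100:
                raise ValueError(
                    "Doses over 99 should not include decimal fractions. "
                    "Please delete the decimal point.")
            digits += key
        elif key.isdigit():
            digits += key
    return float(digits)
```

In a real device the error message would appear on screen with colour, flashing or a beep to draw attention; the key point is that the correction is left to the user, never made silently.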
By following design rules such as this, programmers can prevent user errors, which are bound to happen, from causing big problems.
Avoiding errors
Sometimes, through the way we design software interfaces and their interaction, we can do even better than this, though. So far we are letting people make mistakes and then telling them, to help them pick up the pieces afterwards. With better design we can help them avoid making the mistake in the first place, or spot the mistake themselves as soon as they make it.
Doing this is again about controlling user attention as a magician does. An interaction designer needs to do this in the opposite way to the magician though, directing the user’s attention to the place it needs to be to see what is really happening as they take actions, rather than away from it.
To use a digit keypad, the user’s attention has to be on their fingers so they can see where to put them to press a given digit. They look at the keypad, not the screen. The design of the digit keypad draws their attention to the wrong place. However, there are lots of ways to enter numbers and the digit keypad is only one. Another way is to use cursor keys (left, right, up and down) and have a cursor on the screen move to the position where a digit will be changed. Now, once the person’s finger is on, say, the up arrow, their attention naturally moves to the screen as that button is just pressed repeatedly until the correct digit is reached. The user is watching what is happening, watching the program’s output rather than their input, so is now less likely to make a mistake. If they do overshoot, their attention is in the right place to see it and immediately correct it. Experiments showed that this design did lead to fewer large errors, though it is slower. With numbers, though, accuracy is more likely to matter than absolute speed, especially in medical situations.
There are still subtleties to the design though: should a digit roll over from 9 back to 0, for example? If it does, should the next digit increase by 1 automatically? Probably not, as these are the kinds of things that lead to other errors (out by a factor of 10). Instead, going up from 9 should lead to a warning.
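The cursor-key design can be sketched in a few lines of Python. This is an assumed design for illustration, not any real device's code; note how going up from 9 produces a warning rather than silently rolling over:

```python
# A sketch of cursor-key number entry: up/down change one digit at a
# time, left/right move the cursor, and there is no silent rollover.
class CursorEntry:
    def __init__(self, num_digits=4):
        self.digits = [0] * num_digits
        self.pos = 0  # which digit the cursor is on

    def left(self):
        self.pos = max(0, self.pos - 1)

    def right(self):
        self.pos = min(len(self.digits) - 1, self.pos + 1)

    def up(self):
        if self.digits[self.pos] == 9:
            print("Warning: digit is already at 9")  # no silent rollover
            return
        self.digits[self.pos] += 1

    def down(self):
        if self.digits[self.pos] == 0:
            print("Warning: digit is already at 0")
            return
        self.digits[self.pos] -= 1

    def value(self):
        return int("".join(str(d) for d in self.digits))
```

Because every press changes the number visibly on screen, the user's eyes stay on the output, which is exactly where their attention needs to be.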
Learn from magicians
Magicians are experts at making people make mistakes without them even realising they have. The delight in magic comes from being so easily fooled that the impossible seems to have happened. When writing software we need to use the same understanding of our cognitive resources, and how to manipulate them, to prevent our users making mistakes. There are many ways to do this, but we should certainly never write software that silently corrects user errors. We should control the user’s attention from the outset, using similar techniques to a magician, so that their attention is in the right place to avoid problems. Ideally a number entry system such as cursor keys rather than a digit keypad should be used, as then the attention of the user is more likely to be on the number entered in the first place.
Interaction designer
Responsible for the design of not just the interface but how a device or software is used, applying creativity and existing design rules to come up with solutions. Has a deep understanding both of technical issues and of the limitations of human cognition (how our brains work).
Usability consultant
Give advice on making software and gadgets generally easier to use, evaluate designs for features that will make them hard to use or increase the likelihood of errors, finding problems at an early stage.
User experience (UX) consultant
Give advice on ensuring users of software have a positive experience and that using it is not, for example, frustrating.
Medical device developer
Develop software or hardware for medical devices used in hospitals or increasingly in the home by patients. Could be improvements to existing devices or completely novel devices based on medical or biomedical breakthroughs, or on computer science breakthroughs, such as in artificial intelligence.
Research and Development Scientist
Do experiments to learn more about the way our brains work, and/or apply it to give computers and robots a way to see the world like we do. Use it to develop and improve products for a spin-off company.
What do a Nintendo games console and the films Jurassic Park, Beauty and the Beast and Terminator II have in common? They all used Marc Hannah’s chips and linked programs for their amazing computer effects. It is important that we celebrate the work of Black computer scientists, and Marc is one who deserves the plaudits as much as anyone: his work has had a massive effect on the leisure time of everyone who watches movies with special effects or plays video games – and that is just about all of us.
In the early 1980s, with six others, Marc founded Silicon Graphics, becoming its principal scientist. Silicon Graphics was a revolutionary company, pioneering fast computers capable of running the kind of graphics programs on special graphics chips that suddenly allowed the film industry to do amazing special effects. Those chips and linked programs were designed by Marc.
Now computers and games consoles have special graphics chips that do fast graphics processing as standard, but it is Marc and his fellow innovators at Silicon Graphics who originally made it happen.
It all started with his work with James Clark on a system called the Geometry Engine while they were at Stanford. Their idea was to create chips that do all the maths needed for sophisticated manipulation of imagery. VLSI (Very Large Scale Integration), whereby tens of thousands (now billions) of transistors could be put on a single slice of silicon, was revolutionising computer design: suddenly a whole microprocessor could fit on a single chip. They pioneered the idea of using VLSI for creating 3-D computer imagery, rather than just general-purpose computers, and with Silicon Graphics they turned their ideas into an industrial reality that changed both the film and games industries for ever.
Silicon Graphics was the first company to create a VLSI chip in this way, not to be a general-purpose computer, but just to manipulate 3-D computer images.
A simple 3D image in a computer might be implemented as the vertices (corners) of a series of polygons. To turn that into an image on a flat screen needs a series of mathematical manipulations of those points’ coordinates to find out where they end up in that flat image. What is in the image depends on the position of the viewer and where light is coming from, for example. If the object is solid you also need to work out what is in front, so seen, and what is behind, so not. Each time the object, viewer or light source moves, the calculations need to be redone. It is done as a series of passes performing different geometric manipulations, in what is called a geometry pipeline, and it is these calculations they focussed on.

They started by working out which computations had to be really fast: the ones in the innermost loops of the code that did this image processing, so were executed over and over again. This was the complex code that meant processing images took hours or days, because it was doing lots of really complex calculation. Instead of trying to write faster code, though, they created hardware, ie a VLSI chip, to do the job. Their geometry pipeline did the computation in a lightning-fast way because it avoided all the overhead of executing programs, instead implementing the calculations that slowed things down directly in logic gates that did all that crucial maths very directly and so really quickly.
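The kind of maths involved can be shown with a toy software version of one stage of such a pipeline: rotating a cube's vertices and then projecting them onto a flat screen with a perspective divide. This is a simplified illustration of the idea, not the Geometry Engine's actual algorithm, which did such matrix operations directly in hardware:

```python
import math

# Rotate a vertex (x, y, z) about the y axis by the given angle.
def rotate_y(vertex, angle):
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

# Project a 3D point onto a flat screen: points further from the
# viewer are scaled down (the "perspective divide").
def project(vertex, viewer_distance=5.0):
    x, y, z = vertex
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)  # 2D screen coordinates

# The 8 corners of a unit cube.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

# Every time the object or viewer moves, the whole pipeline of
# calculations must be redone for every vertex.
screen_points = [project(rotate_y(v, math.radians(30))) for v in cube]
```

In a real scene there are thousands of vertices and several more pipeline stages (lighting, clipping, hidden-surface removal), all recomputed for every frame, which is why moving these calculations into dedicated silicon made such a difference.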
The result was that their graphics pipeline chips, and the programs that worked with them, became the way that CGI (computer generated imagery) was done in films, allowing realistic imagery, and they were incorporated into games consoles too, allowing ever more realistic-looking games.
So if some amazing special effects make some monster appear totally realistic this Halloween, or you get lost in the world of a totally realistic computer game, thank Marc Hannah, as his graphics processing chips originally made it happen.
Queens is a fairly simple kind of logic puzzle, found for example on LinkedIn as a way to draw you back to the site. Doing daily logic puzzles is good both for mental health and for building logical thinking skills. As with programming, solving logic puzzles is mostly about pattern matching (also a useful skill to practise daily) rather than logic per se. The logic mainly comes in working out the patterns.
Let’s explore this with Queens. The puzzle has simple rules. The board is divided into coloured territories and you must place a Queen in each territory. However, no two Queens can be in the same row or column. Also no two Queens can be adjacent, horizontally, vertically or diagonally.
If we were just to use pure logic on these puzzles we would perhaps return to the rules themselves constantly to try and deduce where Queens go. That is perhaps how novices try to solve puzzles (and possibly get frustrated and give up). Instead, those who are good at puzzles create higher level rules that are derived from the basic rules. Then they apply (ie pattern match against) the new rules whenever the situation applies. As an aside this is exactly how I worked when using machine-assisted proof to prove that programs and hardware correctly met their specification, doing research into better ways to ensure the critical devices we create are correct.
Let’s look at an example from Queens. Here is a puzzle to work on. Can you place the 8 Queens?
Image by Paul Curzon
Where to start? Well notice the grey territory near the bottom. It is a territory that lives totally in one column. If we go to the rules of Queens we know that there must be a Queen in this territory. That means that Queen must be in that column. We also know that only one Queen can be in a column. That means none of the other territories in that column can possibly hold a Queen there. We can cross them all out as shown.
In effect we have created a new derived inference rule.
IF a territory only has squares available in one column
THEN cross out all squares of other territories in that column
By similar logic we can create a similar rule for rows.
Now we can just pattern match against the situation described in that rule. If ever you see a territory contained completely in a row or column, you can cross out everything else in that row/column.
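The derived rule is mechanical enough to write as a small Python sketch. The board representation here (a grid of territory labels plus a set of crossed-out squares) is my own choice for illustration, not how any puzzle site stores it:

```python
# IF a territory only has squares available in one column
# THEN cross out all squares of other territories in that column.
def apply_single_column_rule(board, crossed):
    """board: grid of territory labels; crossed: set of (row, col)
    squares already eliminated. Returns True if anything changed."""
    # Collect the still-available squares of each territory.
    territories = {}
    for r, row in enumerate(board):
        for c, label in enumerate(row):
            if (r, c) not in crossed:
                territories.setdefault(label, set()).add((r, c))
    changed = False
    for label, squares in territories.items():
        cols = {c for (_, c) in squares}
        if len(cols) == 1:  # territory confined to a single column
            col = cols.pop()
            for r in range(len(board)):
                if board[r][col] != label and (r, col) not in crossed:
                    crossed.add((r, col))
                    changed = True
    return changed
```

A solver would keep applying rules like this one until no rule changes anything, just as a human does: each application can unlock new situations for the other rules to match.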
In our case, doing that creates new situations that match the rule. You may also be able to work out other rules. One obvious new rule is the following:
IF a territory only has one free space left and no Queens
THEN put a Queen in that free space
Image by Paul Curzon
We can derive more complicated rules too. For example, we can generalise our first rule to two columns. Can you find a pair of territories that reside in the same two columns only? There is such a pair in the top right corner of our puzzle. If there is such a situation then as both must have a Queen, between them they must be the territories that provide the Queens for both those two columns. That means we can cross out all the squares from other territories in those two columns. We get the rule:
IF two territories only have squares available in two columns
THEN cross out all squares of other territories in both columns
Becoming good at Queens puzzles is all about creating more of these rules that quickly allow you to make progress in all situations. As you apply rules, new rules become applicable until the puzzle is solved.
Can you both apply these rules and if need be derive some more to pattern match your way to solving this puzzle?
It turns out that programming is a lot like this too. For a novice, writing code is a battle with the details of the semantics (the underlying logical meaning) of the language, finding a construct that does what is needed. The more expert you become the more you see patterns where you have a rule you can apply to provide the code solution: IF I need to do this repeatedly counting from 1 to some number THEN I use a for loop like this… IF I have to process a 2-dimensional matrix of possibilities THEN I need a pair of nested for loops that traverse it by rows and columns… IF I need to do input validation THEN I need this particular structure involving a while loop… and so on.
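For example, the input-validation pattern an expert reaches for might look like the following Python sketch (the dose-entry scenario is made up for illustration; it reads from any iterator of responses rather than directly from the keyboard so it is easy to test):

```python
# The input-validation pattern: keep rejecting responses until a
# valid value arrives, then return it.
def read_dose(inputs, max_dose=99):
    """inputs: any iterator of user responses (strings)."""
    for text in inputs:
        try:
            dose = float(text)
        except ValueError:
            continue  # not a number: ask again
        if 0 <= dose <= max_dose:
            return dose
        # out of range: ask again
    raise ValueError("no valid dose entered")
```

The expert does not re-derive this shape each time; they recognise "this is input validation" and pattern match straight to the loop-until-valid structure.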
Perhaps more surprisingly, research into expert behaviour suggests that this is what all expert behaviour boils down to. Expert intuition is all about subconscious pattern matching against situations seen before, turned into subconscious rules, whether for expert fire fighters or expert chess players. Now machine learning AIs are becoming experts at things we are good at. Not surprisingly, what machine learning algorithms are good at is spotting patterns to drive their behaviour.
Women have made vital contributions to computer science ever since Ada Lovelace debugged the first algorithm for an actual computer (written by Charles Babbage) almost 200 years ago (more on CS4FN’s Women Portal). Despite this, women make up only a fraction (25%) of the STEM workforce: only about a fifth of senior tech roles and only a fifth of computer science students are women. The problem starts early: research by the National Centre for Computing Education suggests that female students’ intention to study computing drops off between the ages of 8 and 13. Ilenia Maietta, a computer science student at Queen Mary, talks about her experiences of studying in a male-dominated field and how she is helping to build a network for other women in tech.
Ilenia’s love for science hasn’t wavered since childhood and she is now studying for a master’s degree in computer science – but back in sixth form, the decision was between computer science and chemistry:
“I have always loved science, and growing up my dream was to become a scientist in a lab. However, in year 12, I dreaded doing the practical experiments and all the preparation and calculations needed in chemistry. At the same time, I was working on my computer science programming project, and I was enjoying it a lot more. I thought about myself 10 years in the future and asked myself ‘Where do I see myself enjoying my work more? In a lab, handling chemicals, or in an office, programming?’ I fortunately have a cousin who is a biologist, and her partner is a software engineer. I asked them about their day-to-day work, their teams, the projects they worked on, and I realised I would not enjoy working in a science lab. At the same time I realised I could definitely see myself as a computer scientist, so maybe child me knew she wanted to be scientist, just a different kind.”
The low numbers of female students in computer science classrooms can have the knock-on effect of making girls feel like they don’t belong. These faulty stereotypes that women don’t belong in computer science, together with the behaviour of male peers, continue to have an impact on Ilenia’s education:
“Ever since I moved to the UK, I have been studying STEM subjects. My school was a STEM school and it was male-dominated. At GCSEs, I was the only girl in my computer science class, and at A-levels only one of two. Most of the time it does not affect me whatsoever, but there were times it was (and is) incredibly frustrating because I am not taken seriously or treated differently because I am a woman, especially when I am equally knowledgeable or skilled. It is also equally annoying when guys start explaining to me something I know well, when they clearly do not (i.e. mansplaining): on a few occasions I have had men explain to me – badly and incorrectly – what my degree was to me, how to write code or explain tech concepts they clearly knew nothing about. 80% of the time it makes no difference, but that 20% of the time feels heavy.”
Many students choose computer science because of the huge variety of topics that you can go on to study. This was the case for Ilenia, especially being able to apply her new-found knowledge to lots of different projects:
“Definitely getting to explore different languages and trying new projects: building a variety of them, all different from each other has been fun. I really enjoyed learning about web development, especially last semester when I got to explore React.js: I then used it to make my own portfolio website! Also the variety of topics: I am learning about so many aspects of technology that I didn’t know about, and I think that is the fun part.”
“I worked on [the portfolio website] after I learnt about React.js and Next.js, and it was the very first time I built a big project by myself, not because I was assigned it. It is not yet complete, but I’m loving it. I also loved working on my EPQ [A-Level research project] when I was in school: I was researching how AI can be used in digital forensics, and I enjoyed writing up my research.”
Like many university students, Ilenia has had her fair share of challenges. She discussed the biggest of them all: imposter syndrome, as well as how she overcame it.
“I know [imposter syndrome is] very common at university, where we wonder if we fit in, if we can do our degree well. When I am struggling with a topic, but I am seeing others around me appear to understand it much faster, or I hear about these amazing projects other people are working on, I sometimes feel out of place, questioning if I can actually make it in tech. But at the end of the day, I know we all have different strengths and interests, so because I am not building games in my spare time, or I take longer to figure out something does not mean I am less worthy of being where I am: I got to where I am right now by working hard and achieving my goals, and anything I accomplish is an improvement from the previous step.”
Alongside her degree, Ilenia also supports a small organisation called Byte Queens, which aims to connect girls and women in technology with community support.
“I am one of the awardees for the Amazon Future Engineer Award by the Royal Academy of Engineering and Amazon, and one of my friends, Aurelia Brzezowska, in the programme started a community for girls and women in technology to help and support each other, called Byte Queens. She has a great vision for Byte Queens, and I asked her if there was anything I could do to help, because I love seeing girls going into technology. If I can do anything to remove any barriers for them, I will do it immediately. I am now the content manager, so I manage all the content that Byte Queens releases as I have experience in working with social media. Our aim is to create a network of girls and women who love tech and want to go into it, and support each other to grow, to get opportunities, to upskill. At the Academy of Engineering we have something similar provided for us, but we wanted this for every girl in tech. We are going to have mentoring programs with women who have a career in tech, help with applications, CVs, etc. Once we have grown enough we will run events, hackathons and workshops. It would be amazing if any girl or woman studying computer science or a technology related degree could join our community and share their experiences with other women!”
For women and girls looking to excel in computer science, Ilenia has this advice:
“I would say don’t doubt yourself: you got to where you are because you worked for it, and you deserve it. Do the best you can in that moment (our best doesn’t always look the same at different times of our lives), but also take care of yourself: you can’t achieve much if you are not taking care of yourself properly, just like you can’t do much with your laptop if you don’t charge it. And finally, take space: our generation has the possibility to reframe so much wrongdoing of the past generations, so don’t be afraid to make yourself, your knowledge, your skills heard and valued. Any opportunities you get, any goals you achieve are because you did it and worked for it, so take the space and recognition you deserve.”
Ilenia also highlighted the importance of taking opportunities to grow professionally and personally throughout her degree, “taking time to experiment with careers, hobbies, sports to discover what I like and who I want to become” mattered enormously. Following her degree, she wants to work in software development or cyber security. Once the stress of coursework and exams is gone, Ilenia intends to “try living in different countries for some time too”, though she thinks that “London is a special place for me, so I know I will always come back.”
Ilenia encourages all women in tech who are looking for a community and support, to join the Byte Queens community and share with others: “the more, the merrier!”
– Ilenia Maietta and Daniel Gill, Queen Mary University of London
Biologists often analyse data about the cell biology of living animals to understand their development. A large part of this involves looking for patterns in the data to use to refine their understanding of what is going on. The trouble is that patterns can be hard to spot when hidden in the vast amount of data that is typically collected. Humans are very good at spotting patterns in sound though – after all, that is all music is. So why not turn the data into sound to find these biological patterns?
In hospitals, the heartbeats of critically ill patients are monitored by turning the data from heart monitors into sounds. Under the sea, in (perhaps yellow) submarines, “golden ear” mariners use their listening talent to help with navigation and detect potential danger for fish and the submarine. They do this by listening to the soundscapes produced by sonar, built up from echoes from the objects round about. This way of using sounds to represent other kinds of data is called ‘sonification’. Perhaps similar ideas can help to find patterns in biological data? An interdisciplinary team of researchers from Queen Mary, including biologist Rachel Ashworth, audio experts Mathieu Barthet and Katy Noland, and computer scientist William Marsh, tried the idea out on the zebrafish. Why zebrafish? Well, they are used a lot for the study of the development of vertebrates (animals with backbones). In fact, the zebrafish is what is called a ‘model organism’: a creature that lots of people do research on as a way of building a really detailed understanding of its biology. The hope is that what you learn about zebrafish will help you understand the biology of other vertebrates too. Zebrafish make a good model organism because they mature very quickly. Their embryos are also transparent. That is really useful when doing experiments because it means you can directly see what is going on inside their bodies using special kinds of microscopes.
The particular aspect of zebrafish biology the Queen Mary team has been investigating is the way calcium signals are used by the body. Changes in the concentration of calcium ions are important as they are used inside a cell to regulate its behaviour. These changes can be tracked in zebrafish by injecting fluorescent dyes into cells. Because the zebrafish embryos are transparent whatever has been fluorescently labelled can then be observed.
Calcium ions are used inside a cell to regulate its behaviour
The Queen Mary team developed software that detects calcium changes by automatically spotting the peaks of activity over time. They relied on a technique that is used in music signal processing to detect the start of notes in musical sequences. Finding the peaks in a zebrafish calcium signal and the notes from the Beatles’ Day Tripper riff may seem to be light years apart, but from a signal processing point of view, the problems are similar. Both involve detecting sudden bursts of energy in the signals. Once the positions of the calcium peaks have been found they can then be monitored by sonifying the data.
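The team’s actual onset-detection software isn’t published here, but the core idea of peak picking can be sketched in a few lines of Python. In this toy version (the threshold and the example trace are invented for illustration), a sample counts as a peak when it beats both of its neighbours and rises above a threshold:

```python
def find_peaks(signal, threshold=0.5):
    """Return indices where the signal rises above its neighbours and a
    threshold - a simple stand-in for musical onset detection."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

# A toy "calcium trace": mostly quiet, with two sudden bursts of activity
trace = [0.1, 0.2, 0.9, 0.3, 0.1, 0.2, 0.8, 0.2, 0.1]
print(find_peaks(trace))  # [2, 6] - the positions of the two bursts
```

Real onset detectors are more sophisticated (they usually work on the energy of the signal across frequency bands), but the job is the same: find the moments where something suddenly happens.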
What the team found using this approach is that the calcium activity in the muscle cells of zebrafish varies a lot between early developmental stages of the embryo and the late ones. You can have a go at hearing the difference yourself – listen to the sonified versions of the data.
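Sonification itself can be sketched simply too. This hypothetical mapping (the base frequency and scale are chosen arbitrarily, not taken from the Queen Mary system) turns each data value into a musical pitch, so bursts of calcium activity would come out as runs of higher notes:

```python
def sonify(values, base_freq=220.0, semitones_per_unit=12):
    """Map each data value to a pitch in Hz: bigger values give higher
    notes. A minimal sonification - one tone per data point."""
    freqs = []
    for v in values:
        semis = v * semitones_per_unit          # how far up the scale to go
        freqs.append(round(base_freq * 2 ** (semis / 12), 1))
    return freqs

print(sonify([0.0, 0.5, 1.0]))  # [220.0, 311.1, 440.0]
```

Play those frequencies in sequence and you would hear the data rise: a value of 1.0 sounds a full octave above a value of 0.0.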
Train timetables are complex. When designing a timetable for railways you have to think about the physical capabilities of the actual train, what stops it needs to make, whether it is carrying passengers or freight, the number of platforms at a station, the gradient of the track, and the placement of passing loops on single-track sections, amongst many other things. Data visualisation can help with timetabling and make sure our railways continue to run on track!
Data visualisation is an important area in computer science. If you had a huge amount of complex data in a spreadsheet, your first thought wouldn’t be to sit down with a cup of tea and spend hours reading through it – instead you might graph it or create an infographic to get a better picture. Humans are very bad at understanding and processing raw data, so we speed up the process by converting it to something easier to understand.
Timetabling is like this – we need to consider the arrival and departure times from all stations for each train. You might have used a (perhaps now) old fashioned paper timetable, with each train as a column, and the times at each station along the rows, like the one below. This is great if you’re a passenger… you can see clearly when your train leaves, and when it gets to your desired destination. If you’re unlucky enough to miss a train, you can also easily scan along to find the next one.
Image by Daniel Gill for CS4FN
Unfortunately, this kind of presentation might be more challenging for timetable designers. In this timetable, there’s a mix of stopping and fast services. You can see which of them are fast based on the number of stations they skip (marked with a vertical line), but, because they travel at different speeds it’s difficult to imagine where they are on the railway line at any one time.
One of the main challenges in railway timetabling, and perhaps the most obvious, is that trains can’t easily overtake slower ones in front of them. It’s this quirk that causes lots of problems. So, if you needed to insert another train into this timetable you would need to consider all the departure times of the trains around it, to make sure there are no conflicts – this is a lot of data to juggle.
But there’s an easier way to visualise these timetables: introducing Marey charts! They represent a railway on a graph, with stations listed vertically, time along the top, and each train represented by a single (bumpy) line. If we take our original timetable from above and convert it to a Marey chart, we get something that looks like this:
Image by Daniel Gill for CS4FN
Though thought to have been invented by a lesser-known railway engineer called Charles Ibry, these charts were popularised by Étienne-Jules Marey, and (perhaps unfairly) take his name.
How does it work?
There are a few things that you might notice immediately from this diagram. The stations along the side aren’t equally spaced, like you might expect from other types of graph, instead they are spaced relative to the distance between the stations on the actual railway. This means we can estimate when a fast train will pass each of the stations. This is an estimation, of course, because the train won’t be travelling at a constant speed throughout – but it’s better than our table from before which is no help at all!
Given this relative spacing, we can also estimate how fast a train is going. The steepness of the line, in this diagram, directly reflects the speed of the train*. Look at the dark blue and purple trains – they both leave Coventry really close together, but the purple train is a bit slower, so the gap widens near Birmingham International. We can also see that trains that do lots of stopping (when the line is horizontal) travel at a much slower average speed than the fast ones: though that shouldn’t be a surprise!
*There’s a fun reason that this is the case. The gradient (the steepness of the line) is calculated as the change in y divided by the change in x. In this case, the change in the y dimension is the distance the train has travelled, and the change in x is the time it has taken. If you have studied physics, you might immediately recognise that distance divided by time is speed (or velocity). Therefore, the steepness in a Marey chart is proportional to the speed of the train.
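Since the footnote’s point is just distance divided by time, here it is as a tiny calculation (the Coventry to Birmingham International distance used below is a rough, illustrative figure, not an exact one):

```python
def average_speed(dist_km, minutes):
    """Distance divided by time - the gradient of a train's line on a
    Marey chart - converted to kilometres per hour."""
    return dist_km * 60 / minutes

# Suppose Coventry to Birmingham International is roughly 15 km:
print(average_speed(15, 10))  # a non-stop train taking 10 minutes: 90.0 km/h
print(average_speed(15, 30))  # a stopping train taking 30 minutes: 30.0 km/h
```

A steeper line on the chart is just a bigger answer from this sum.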
We can also see that the lines don’t intersect at all. This is good, because, quite famously, trains can’t really overtake. If there was an intersection it would mean that at some point, two trains would need to be at the same location at the same time. Unless you’ve invented some amazing quantum train (more about the weirdness of quantum technology in this CS4FN article), this isn’t possible!
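This “lines must not cross” rule is easy for a computer to check. In the simplified sketch below (all the times are invented), each train over a single stretch of line is just a pair of departure and arrival times, and two lines cross exactly when the train that leaves first arrives last – in other words, when one train would have to overtake the other:

```python
def lines_cross(train1, train2):
    """Each train is (depart_time, arrive_time) in minutes over the same
    stretch of line. Their Marey-chart lines cross - a conflict - if the
    order the trains leave in differs from the order they arrive in."""
    depart1, arrive1 = train1
    depart2, arrive2 = train2
    return (depart1 < depart2) != (arrive1 < arrive2)

slow_freight = (0, 45)    # leaves at minute 0, takes 45 minutes
fast_express = (10, 30)   # leaves later but arrives earlier - must overtake!
print(lines_cross(slow_freight, fast_express))  # True: a conflict
print(lines_cross(slow_freight, (50, 70)))      # False: safely behind
```

Real timetabling software has to handle stations, loops and safety margins too, but at heart it is checking for exactly this kind of crossing.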
Putting it to the Test
Put yourself in the shoes of a railway timetable designer! We have just heard that there is a freight train that needs to run through our little section of railway. The driver needs to head through sometime between 10:45 and 12:15 – how very convenient: we’ve already graphed that period of time.
The difficulty is, though, that their freight train is going to take a very slow 45 minutes to go through our section of railway – how are we going to make it fit? Let’s use the Marey chart to solve this problem visually. Firstly, we’ll put a line on that matches the requirements of the freight train:
Image by Daniel Gill for CS4FN
And then let’s re-enable all the other services.
Image by Daniel Gill for CS4FN
Well, that’s not going to work. We can see from this, though, how slow this freight train actually is, especially compared to the express trains it overlaps with. So, to fix this, we can shift it over. We want to aim for a placement where there are no overlaps at all.
Image by Daniel Gill for CS4FN
Perfect, now it’s going to be able to make the journey without interfering with our other services at all.
Solving Problems
When we’re given a difficult problem, it’s often a good idea to find a way to visualise it (or as my A-Level physics teacher often reminded me: “draw a diagram!”). This kind of visualisation is used regularly in computer science. From students learning the craft, all the way to programmers and academics at the top of their field – they all use diagrams to help understand a problem.
Mission Impossible always involved the team taking on apparently impossible missions, delivered by a message concluding with the famous line that “This message will self-destruct in 10 seconds”. It was always followed by the message physically destructing in some dramatic way such as flames or smoke coming from the tape recorder. Now, it’s been shown that it is possible to actually do apparently impossible destruction of messages: to send holographic messages that the sender can just make disappear even after they have been sent. It relies on the apparently impossible, but real properties of quantum physics.
A hologram is a 3-dimensional image formed using laser light. It records light scattered from objects coming from lots of different directions. This differs from photography where the light recorded comes from one direction only. You can see examples on the back of bank cards (often a flying dove) where they are used as a hard-to-copy security device.
Now researchers at the University of Exeter have shown it is possible to make quantum holograms that make use of quantum effects. They are made from entangled photons: pairs of light particles that have been linked together in a way that means that, after the entangling, whatever happens to one immediately affects the other too … however far apart they are. Entanglement is one of those weird properties of quantum physics, the physics of the very, very small. It means that subatomic particles, once entangled, can later instantly affect each other even when separated by large distances.
This effect has now been put to novel use by Jensen Li and team in their research at Exeter. They entangled streams of pairs of photons emitted from a crystal using lasers, but then separated the pairs. One stream of photons from the pairs was used to create a holographic image on a special kind of material called a meta-material. Meta-materials are just materials engineered at very tiny scales so as to have properties not usually seen in nature. For example, they might be designed to carefully control light or radio waves by reflecting them very precisely in certain directions. One use of that might be to bounce light round from behind an object so that it appears invisible. Some butterfly wings and bird feathers (think peacocks and kingfishers) actually do a similar sort of thing with very precise microscopic scale surface structures that cause their startlingly bright, shimmering colours.
Exeter’s meta-material was flat but with a special surface designed to have tiny features that manipulate light in very precise ways that create a hologram based on the information encoded in the beam of laser light. In their first test, showing that their quantum hologram system works, the hologram just showed the letters H, D, V, A. The light from this hologram continued on to a camera, so a picture of the hologram could be taken. So far so normal.
The cunning (and rather weird) thing though is due to what they did to the other stream of light. Each photon in this stream was entangled with a photon in the hologram light stream. Due to the quantum physics of entanglement, that meant that changes to these particles could affect those making the hologram. In particular, the Exeter team had this second stream pass through a polarising filter, essentially like the lens of polaroid sunglasses. Light vibrates in different directions. A sunglasses lens cuts out the light vibrating in a given direction. Now, the letter H in the message was created from light polarised horizontally unlike the other letters which were polarised vertically. This meant that when the second stream of light was passed through a polarising filter blocking out the horizontally polarised light, it also affected the photons entangled with the blocked photons. The other stream of light, that created the hologram, was affected even though it went nowhere near the polarising filter. The result was that the horizontally polarised H could be made to disappear from the message caught on camera. It really did self-destruct, just in a quantum way.
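We can’t simulate real entanglement on an ordinary computer, but a toy model gives the flavour of the self-destructing H. In this sketch (everything about it is invented for illustration, not taken from the Exeter experiment), each entry stands for the shared polarisation of an entangled pair, and blocking one polarisation in the second stream removes those photons’ partners from the hologram stream too:

```python
def filter_stream(hologram_polarisations, blocked="H"):
    """Toy model: each entry is the polarisation ('H' or 'V') shared by an
    entangled pair. Blocking one polarisation in the second stream also
    removes those photons' partners from the hologram stream."""
    return [p for p in hologram_polarisations if p != blocked]

# The message H, D, V, A: only the letter H was written in horizontal light
letters = ["H", "D", "V", "A"]
polarisations = ["H", "V", "V", "V"]  # polarisation of the light for each letter

surviving = filter_stream(polarisations)
remaining_letters = [letter for letter, p in zip(letters, polarisations) if p != "H"]
print(remaining_letters)  # ['D', 'V', 'A'] - the H has self-destructed
```

The real physics is far stranger, of course: the filter never touches the hologram’s light at all, yet the letter still vanishes.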
If scaled up such a system could be used to send messages that are still (instantly) controlled by the sender even after they have been sent, whether disappearing or being changed to say something else. The approach could also be incorporated into secure quantum computing communication systems, where the messages are also encrypted.
Fortunately, this blog is not a quantum blog, so will not self-destruct in 10 seconds … so please do share it with your friends!
What is an algorithm? It is just a set of instructions that if followed precisely and in the given order, guarantees some result. The concept is important to computer scientists because all computers can do is follow instructions, but they do so perfectly. Computers do not understand what they are doing so can’t do anything but follow their instructions. That means whatever happens the instructions must always work. We can see what we mean by an algorithm by comparing it to the idea of a self-working trick from conjuring.
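A small program makes the point concrete. The little algorithm below (a classic example, not tied to any particular trick) finds the largest number in a list: follow, or run, the steps exactly and the right answer is guaranteed, whether or not you understand why it works:

```python
def largest(numbers):
    """An algorithm: a set of precise steps that, followed exactly,
    guarantees we end up with the largest number in the list."""
    biggest = numbers[0]        # Step 1: start with the first number
    for n in numbers[1:]:       # Step 2: look at each remaining number in turn
        if n > biggest:         # Step 3: if it beats the best seen so far...
            biggest = n         #         ...remember it instead
    return biggest              # Step 4: report the final answer

print(largest([3, 17, 8, 16]))  # 17
```

The computer running this has no idea what “largest” means; it just follows the instructions blindly, and the answer comes out right anyway.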
If you follow the steps of a self-working trick you will have the magic effect even if you have no idea how it works. Below is a demonstration of a self-working magic jigsaw trick (you can download it from https://conjuringwithcomputation.wordpress.com/resources/, print it, cut out the pieces and do it yourself, following the instructions below).
Image by CS4FN
The steps of the trick are:
1) Count the robots (ignore the green monsters and the robot dog)….There are 17.
2) Swap the top two pieces on the left with those on the right, lining the jigsaw back up.
3) Count the robots ….There are 16. One has disappeared.
Magically a robot has disappeared! Which one disappears and where did it go? Was it swallowed by a green monster, did it teleport away?
How did that happen anyway?
Image by CS4FN
By following the steps you can make the trick work…even if you haven’t worked out how it works, a robot still disappears. You do not need to understand, you just need to be able to follow instructions. It is a self-working trick. Follow the steps of the trick exactly and the robot disappears. It is just an algorithm. Self-working tricks are just algorithms for doing magic. When you follow the steps of the trick you are acting like a computer, blindly following the instructions in its program!
Mid-September, as many young people are heading back to school after their summer holiday, is Asthma Week, when NHS England suggests that teachers, employers and government workers #AskAboutAsthma. The goal is to raise awareness of the experiences of those with asthma, and to suggest techniques to put in place to help children and young people with asthma live their best lives.
One of the key bits of kit in the arsenal of people with asthma is an inhaler. When used, an inhaler can administer medication directly into the lungs and airways as the user breathes in. In the case of those with asthma, an inhaler can help to reduce inflammation in the airways which might prevent air from entering the lungs, especially during an asthma attack.
It’s only recently, however, that inhalers are getting the technology treatment. Smart inhalers can help to remind those with asthma to take their medication as prescribed (a common challenge for those with asthma) as well as tracking their use which can be shared with doctors, carers, or parents. Some smart inhalers can also identify if the correct inhaler technique is being used. Researchers have been able to achieve this by putting the audio of people using an inhaler through a neural network (a form of artificial intelligence), which can then classify between a good and bad technique.
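The real systems use neural networks trained on many recordings, but the flavour of “classifying inhaler technique from audio” can be sketched with a toy rule. Everything below – the feature values, the thresholds, even the idea of judging by loudness alone – is invented for illustration:

```python
def classify_technique(energies, min_inhale_energy=0.6, min_duration=3):
    """Toy stand-in for the real neural network: call the technique 'good'
    if the recorded breath-in was loud enough for long enough.
    (Feature values and thresholds here are made up for illustration.)"""
    loud_samples = [e for e in energies if e >= min_inhale_energy]
    return "good" if len(loud_samples) >= min_duration else "bad"

# Each number is the loudness of one moment of a recorded inhalation
print(classify_technique([0.7, 0.8, 0.9, 0.7, 0.2]))  # "good" - a long, strong breath
print(classify_technique([0.7, 0.2, 0.1, 0.1, 0.1]))  # "bad" - too short a puff
```

A neural network does something conceptually similar, except that it learns which features matter, and how much, from thousands of labelled examples rather than having the rule written in by hand.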
As with any medical technology, these smart inhalers need to be tested with people with asthma to check that they are safe and effective, and importantly to check that they are better than the existing solutions. One such study started in Leicester in July 2024, where smart inhalers (in this case, ones that clip onto existing inhalers) are being given to around 300 children in the city. The researchers will wait to see if these children have better outcomes than those who are using regular inhalers.
This sort of technology is a great example of what computer scientists call the “Internet of Things” (IoT). This refers to small computers which might be embedded within other devices which can interact over the internet… think smart lights in your home that connect to a home assistant, or fridges that can order food when you run out.
A lot of medical devices are being integrated into the internet like this… a smart watch can track the wearer’s heart rate continuously and store it in a database for later, for example. Will this help us to live happier, healthier lives though? Or could we end up finding concerning patterns where there are none?