What was the first technology for recording music: CDs? Records? 78s? The phonograph? No. Trained songbirds came before all of them.
Composer, musician, engineer and visiting fellow at Goldsmiths University, Sarah Angliss usually has a robot on stage performing live with her. These robots are not slick high-tech cyber-beings, but junk-modelled automata. One, named Hugo, sports a spooky ventriloquist doll’s head! Sarah builds and programs her robots herself.
She is also a sound historian, and worked on a Radio 4 documentary, ‘The Bird Fancyer’s Delight’, uncovering how birds have been used to provide music across the ages. During the 1700s people trained songbirds to sing human-invented tunes in their homes. You could buy special manuals showing how to train your pet bird. If young birds heard a tune played over and over again, with no other birds around to put them right, they would adopt that song as their own. Playing the recorder was one way to train them, but special instruments were also invented to do the job automatically.
With the invention of the phonograph, home songbird popularity plummeted but it didn’t completely die out. Blackbirds, thrushes, canaries, budgies, bullfinches and other songbirds have continued to be schooled to learn songs that they would never sing in the wild.
Website whimsy 1 – the UK Government (surprisingly)!
If you’re reading this post today (Monday 26th August 2024) it’s a Bank Holiday in England & Wales and in Northern Ireland. The UK Government’s website has a page https://www.gov.uk/bank-holidays which lists all the upcoming dates for the next two years’ worth of bank holidays (so people can put them in their diaries). But… if you visit the page on a Bank Holiday you’ll be met with some bunting, which isn’t there on other days. If you’re reading this post tomorrow you’ll have to wait until the next Bank Holidays, which are Christmas Day, Boxing Day and then New Year’s Day. On those days the bunting changes to tinsel!
Website whimsy 2 – Wikipedia
Throughout history people have tried to make money and sometimes they do so rather dishonestly. Occasionally people claim that they have a qualification in a subject – in the hope that this will make people trust them, and give them money.
Normally it takes time and effort to get a genuine qualification (and for some qualifications it costs money too). A qualification that is suspiciously easy to get (just make a payment and then get a certificate) raises red flags and if someone thinks someone else’s qualification might be fraudulent there’s a fun way they can draw attention to this.
There’s a page on Wikipedia currently called “List of animals awarded human credentials”. It used to have the even better name “List of animals with fraudulent diplomas”, and before that it was “List of cats with fraudulent diplomas”, until people started adding more animals! It’s full of examples where people filled in an online form and paid a fee, using their pet’s name as the ‘person’ to be awarded. If someone claims a particular credential but someone else was able to get the same credential for their dog by paying $50 and filling in a form… well, don’t give them any money!
Try your hand at making 3D pictures with stereoscopy
Bank Holiday weather is famous for being inconveniently changeable but if the rain holds off and you’re out and about with a smartphone camera (or any camera) then you can quite easily create some 3D images with them (you can do this indoors too of course). We’ve put together some basic instructions (below) and some more detailed information, in case teachers might also like to try this in class some time. (If the weather is off-putting there are also some ready-to-use images in that link).
Basic instructions: take two photos a few inches apart, line them up, gaze between them and adjust your focus until a third image appears between them, combining the two into one that has depth. This is just recreating what your eyes do naturally every day – combining what your left eye and your right eye see separately into one view of the world around you.
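If you have a computer handy you can let it do the lining up for you. Here’s a minimal sketch using Python’s Pillow imaging library to paste two photos into a single side-by-side stereo pair; the filenames left.jpg and right.jpg are placeholders for your own two photos.

```python
# A minimal sketch using the Pillow imaging library: combine two photos
# taken a few inches apart into one side-by-side stereo pair.
# "left.jpg" and "right.jpg" are placeholder names for your own photos.
from PIL import Image

left = Image.open("left.jpg")
right = Image.open("right.jpg")

# Scale both photos to the same height so they line up.
height = min(left.height, right.height)
left = left.resize((left.width * height // left.height, height))
right = right.resize((right.width * height // right.height, height))

# Paste them next to each other with a small gap in between.
gap = 20
pair = Image.new("RGB", (left.width + right.width + gap, height), "white")
pair.paste(left, (0, 0))
pair.paste(right, (left.width + gap, 0))
pair.save("stereo_pair.jpg")
```

Open the saved image at a comfortable size and try the gazing trick described above.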
Photographs and icons created by Jo Brodie for CS4FN.
The nurse types in a dose of 100.1 mg [milligrams] of a powerful drug and presses start. It duly injects 1001 mg into the patient without telling the nurse that it didn’t do what it was told. You wouldn’t want to be that patient!
Designing a medical device is difficult. It’s not creating the physical machine that causes problems so much as writing the software that controls everything the machine does. The software is complex and it has to be right. But what do we mean by “right”? The most obvious thing is that when a nurse sets it to do something, that is exactly what it does.
Getting it right is subtler than that though. The device must also be easy to use and not mislead the nurse: the human-computer interface has to be right too. The interface is the part of the software that allows you to interact with a gadget – what buttons you press to get things done and what feedback you are given. There are some basic principles to follow when designing interfaces. One is that the person using a device should always be clearly told what it is doing.
Manufacturers need ways to check their devices meet these principles: to know that they got it right.
It’s not just the manufacturers, though. Regulators have the job of checking that machines that might harm people are ‘right’ before they allow them to be sold. That’s really difficult given the software could be millions of lines long. Worse, they only have a short time to give an answer.
Million to one chances are guaranteed to happen.
Problems may only happen once in a million times a device is used. That makes them virtually impossible to find by having someone try possibilities to see what happens, the traditional way software is checked. Of course, if a million devices are bought, then a million-to-one chance will happen to someone, somewhere, almost immediately!
Paolo Masci at Queen Mary University of London has come up with a way to help, and in doing so found a curious problem. He’s been working with the US regulator for medical devices – the FDA – and has developed a way to use maths to find problems. It involves creating a mathematical description of what critical parts of the interface program do. Properties, like the user always knowing what is going on, can then be checked using maths. Paolo tried it out on the number-entry code of a real medical device and found some subtle problems. He showed that if you typed in certain numbers, the machine actually treated them as numbers ten times bigger. Type in a dose of 100.1 and the machine really did set the dose to be 1001. It ignored the decimal point because, on such a large dose, it assumed small fractions were irrelevant – yet another part of the code still allowed you to continue typing digits. Worse still, the device ignored the decimal point silently, making no attempt to help the nurse notice the change. A busy nurse would need to be extremely vigilant to see the tiny decimal point was missing given the lack of warning.
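To see how a bug like this can arise, here is a deliberately simplified sketch (our illustration of the class of problem, not the real device’s software or Paolo’s actual analysis) of a number-entry routine that silently drops the decimal point on large values:

```python
# Illustrative only -- NOT the real device's software. A number-entry
# routine that silently drops the decimal point once the dose is 'large'.
def interpret_keys(keys):
    value = 0.0
    decimal_places = 0
    for key in keys:
        if key == ".":
            # The bug: on large doses, fractions are assumed irrelevant,
            # so the decimal point is ignored with no warning at all.
            if value < 100:
                decimal_places = 1
        elif decimal_places:
            value += int(key) / 10 ** decimal_places
            decimal_places += 1
        else:
            value = value * 10 + int(key)
    return value

print(interpret_keys("99.1"))   # 99.1, as the nurse intended
print(interpret_keys("100.1"))  # 1001.0 -- ten times the intended dose!
```

The danger isn’t just the dropped point, it’s the silence: nothing tells the nurse that the ‘.’ keystroke did nothing.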
A useful thing about Paolo’s approach is that it gives you the button presses that lead to the problem. With that you can check other devices very quickly. He found that medical devices from three other manufacturers had exactly the same problem. Different teams had all programmed in the same problem. None had thought that if their code ignored a decimal point, it ought to warn the nurse about it rather than create a number ten times bigger. It turns out that different programmers are likely to think the same way and so make the same mistakes (see ‘Double or Nothing‘).
Now the problem is known, nurses can be warned to be extra careful and the manufacturers can update the software. Better still they and the regulators now have an easy way to check their programmers haven’t made the same mistake in future devices. In future, whether vigilant or not, a nurse won’t be able to get it wrong.
Ariane 5 on the launch pad. Photo Credit: (NASA/Chris Gunn) Public Domain via Wikimedia Commons.
If you spent billions of dollars on a gadget you’d probably like it to last more than a minute before it blows up. That’s what happened to a European Space Agency rocket. How do you make sure the worst doesn’t happen to you? How do you make machines reliable?
A powerful way to improve reliability is to use redundancy: double things up. A plane with four engines can keep flying if one fails. Worried about a flat tyre? You carry a spare in the boot. These situations are about making physical parts reliable. Most machines are a combination of hardware and software though. What about software redundancy?
You can have spare copies of software too. Rather than a single version of a program you can have several copies running on different machines. If one program goes wrong another can take over. It would be nice if it was that simple, but software is different to hardware. Two identical programs will fail in the same way at the same time: they are both following the same instructions, so if one goes wrong the other will too. That was vividly shown by the maiden flight of the Ariane 5 rocket. Less than 40 seconds after launch, things went wrong. The problem was to do with a big number that needed 64 bits of storage space to hold it. The program’s instructions moved it to a storage place with only 16 bits. With not enough space, the number was mangled to fit. That led to calculations by the guidance system going wrong. The rocket veered off course and exploded. The program was duplicated, but both versions were the same, so both agreed on the same wrong answers. Seven billion dollars went up in smoke.
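You can watch the same kind of mangling happen in miniature. In this sketch the velocity value is made up, and Python’s ctypes stands in for the rocket’s 16-bit store: a 16-bit signed integer can only hold numbers from -32,768 to 32,767.

```python
# A made-up value standing in for a reading that needs more than 16 bits.
import ctypes

velocity = 40000.0                            # too big for 16 bits
stored = ctypes.c_int16(int(velocity)).value  # squeeze into 16 bits
print(stored)                                 # -25536: mangled to fit!
```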
Can you get round this? One solution is to get different teams to write programs to do the same thing. The separate teams may make mistakes but surely they won’t all get the same thing wrong! Run their programs on different machines and let them vote on what to do. Then, as long as more than half agree on the right answer, the system as a whole will do the right thing. That’s the theory anyway. Unfortunately in practice it doesn’t always work. Nancy Leveson, an expert in software safety from MIT, ran an experiment where different programmers were given the same programs to write. She found they wrote code that gave the same wrong answers. Even if it had used independently written redundant code, it’s still possible Ariane 5 would have exploded.
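Here is what the voting idea looks like as a sketch, with three made-up answers standing in for the outputs of independently written versions of the same calculation:

```python
# A minimal sketch of majority voting between redundant programs.
from collections import Counter

def vote(*answers):
    answer, count = Counter(answers).most_common(1)[0]
    if count * 2 > len(answers):       # more than half must agree
        return answer
    raise RuntimeError("no majority -- fail safe rather than guess")

# Made-up outputs from three independently written versions;
# one version has a bug, but the majority answer wins.
answer_a, answer_b, answer_c = 42, 42, 41
print(vote(answer_a, answer_b, answer_c))   # 42
```

As Leveson’s experiment showed, the catch is that if the versions share a mistake, the majority can confidently outvote the correct answer.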
Redundancy is a big help but it can’t guarantee software works correctly. When designing systems to be highly reliable you have to assume things will still go wrong. You must still have ways to check for problems and to deal with them so that a mistake (whether by human or machine) won’t turn into a disaster.
The World Health Organisation currently estimates that around 1.3 billion people, or one in six people on Earth, “experience significant disability”. Designers who are creating devices and tools for people to use need to make sure that the products they develop can be used by as many people as possible, not just non-disabled people, to make sure that everyone can benefit from them.
Disabled people can face lots of barriers in the workplace, including some that seem simple to address: problems using everyday ICT and other tech. While there are a lot of fantastic Assistive Technology (AT) products, unfortunately not all of them are suitable, and products that don’t serve disabled people’s needs end up being abandoned.
One challenge is that some of the people who have been doing the designing might not have direct experience of disability themselves, so they are less able to think about their design from that perspective. Solutions to this can include making sure that disabled computer scientists and human-computer interaction researchers are part of the team of designers and creators in the first place, or by making it easier for other disabled people to be involved at an early stage of design. This means that their experience and ideas can contribute to making the end product more relevant and useful for them and others. Alongside this there is education and advocacy – helping more young computer scientists, technologists and human-computer interaction designers to start thinking early about how their future products can be more inclusive.
An EPSRC project, “Inclusive Public Activities for Information and Communication Technologies”, has been looking at some practical ways to help. Run by Prof. Cathy Holloway and Dr. Maryam Bandukda and their wider team at UCL, it has established a panel of disabled academics and professionals who can be ‘critical friends’ to researchers planning new projects. By co-creating a set of guidelines for researchers they are providing a useful resource, but it also means that disabled voices are heard at an early stage of the design process, so that projects start off in the right direction.
Prof. Holloway and Dr. Bandukda are based at the Global Disability Innovation Hub (GDI Hub) in the department of computer science at UCL. GDI Hub is a global leader in disability innovation and inclusion and has research reaching over 30 million people in 60 countries. The GDI Hub also educates people to increase awareness of disability, reduce stigma and lay the groundwork for more disability-aware designers to benefit people in the future with better products.
An activity that the UCL team ran in February 2024, for schools in East London, was a week-long inclusive ICT “Digital Skills and Technology Innovation” bootcamp. They invited students in Year 9 and above to learn about 3D printing, 3D modelling, laser cutting, AI and machine learning using Python, and augmented and virtual reality experiences, along with a chance to visit Google’s Accessible Discovery Centre and use their skills to “tackle real-world challenges”.
What are some examples of Assistive Technology?
Screen-reading software can help blind or visually impaired people by reading aloud the words on the page. This is something that can help sighted people too: your document can read itself to you while you do something else. The entire world of audiobooks exists for this reason! D/deaf people can take part more easily in Zoom conversations if speech-to-text captioning is available, so they can read what’s being said. That can also help those whose hearing is fine but who speak a different language and might miss some words. Similarly, you can dictate your clever ideas to your computer or device, which will type them for you. This can be helpful for someone with limited use of their hands, or just someone who’d rather talk than type – which might also explain the popularity of devices and tools like Alexa or Siri.
Web designers want to (and may need to*) make their websites accessible to all their visitors. You can help too – a simple thing that you can do is to add ALT Text (alternative text) to images. If you ever share an image or gif to social media it’s really helpful to describe what’s in the image for screen readers so that people who can’t view it can still understand what you meant.
*Thanks to regulations that were adopted in 2018 the designers of public sector websites (e.g. government and local council websites where people pay their council tax or apply for benefits) must make sure that their pages meet certain accessibility standards because “people may not have a choice when using a public sector website or mobile app, so it’s important they work for everyone. The people who need them the most are often the people who find them hardest to use”.
“DIX puts disability front and center in the design process, and in so doing aims to create accessible, creative new HCI solutions that will be better for everyone”
You might have come across UI (User Interface) and UX (User Experience); DIX is Disability Interaction – how disabled people use various tech.
Careers
Examples of computer science and disability-related jobs
Both of the jobs listed below are CLOSED and are shown for information only.
[CLOSED] Islington Council, Digital Accessibility Apprentice (full-time), £24k, closed 7 July
Are you interested in web design and do you want to help empower disabled people to become fully engaged within the community? This is a great opportunity to learn about the rapidly growing digital accessibility industry. Qualified and experienced digital accessibility specialists are sought after.
This role is focused on maximising comms-based engagement across the GDI Hub’s portfolio, supporting GDI Hub’s growing outreach across project-based deliverables and organisational comms channels (e.g. social media, websites, content generation).
We’re occasionally asked by school pupils, their parents and teachers about where young people can find out about work experience in something to do with computer science. We’ve put together some general information which we hope is helpful, and there’s also information further down the page that might be useful for people who’ve finished a computing degree and are wondering “What’s next?”.
Work experience for school students
(This section was originally published on our website for teachers – Teaching London Computing).
Supermarkets – not in the store but in the office, learning about inventory software used to manage stock for in-store shopping as well as online shopping (e.g. Ocado etc).
Shops – more generally pretty much every shop has an online presence and may want to display items for sale (perhaps also using software to handle payment).
Websites – someone who’s a blacksmith might not use a computer in their work directly, but the chances are they’d want to advertise their metal-flattening skills to a wider audience which is only really possible with a web presence.
Websites involve technical aspects (not necessarily Python types of things, but certainly HTML and CSS / JavaScript) but also making websites accessible for users with visual impairments, e.g. labelling elements helpfully and remembering to add ALT text for users of screenreaders. Technical skills are important, but thinking about the end-user is super-important too, and it’s often a skill that people pick up ‘on the job’ rather than being trained in (though that is changing).
Usability – making websites or physical products (e.g. home appliances, cameras, phones, printers, microwaves) easier to use by finding out how easily users can interact with them and considering options for improvement. For computing systems this involves HCI (human-computer interaction) and UX (user experience – e.g. how frustrating is a website?).
Transport – here in London we have buses with a GPS transponder that emits a signal which is picked up by sensors, co-ordinated and translated into real-time information about the whereabouts of various buses on the system. Third-party apps can also use some of this data to provide a service for people who want to know the quickest route to a particular place.
Council services – it’s possible to pay parking fines, council tax and other things online, as well as utility company bills. The programs involved here need to keep people’s private data secure.
Banks – heavy users of ‘fintech’ (financial technology) and security systems, though that might preclude them from taking on people in a work experience setting. Similarly GP surgeries have dedicated IT systems (such as EMIS) for handling confidential patient information and appointments. Even if they can’t take on tech work experience students they may have other work experience opportunities.
Places that offer (or have previously offered) work experience
TechDevJobs website – our ‘jobs in computing’ resource (homepage) should give you an idea of the different sectors which employ all sorts of computer scientists to do all sorts of different things (see the list of jobs organised by sector). There are about 70–80 jobs there so far; it doesn’t cover everything though (that’s almost an impossible task!).
There are obvious computing-related jobs, such as a software company looking for a software developer, but there’s also a job for a lawyer-researcher (someone who is able to practise as a lawyer if necessary but who will mainly be doing research) looking into Cloud Computing. There are all sorts of regulatory aspects to computing, some currently under consideration by the UK Government: data leaks, privacy, appropriateness of use, how securely information is stored, and what penalties there are for misuse.
Possibly a local law firm is doing some work in this area and might be open to offering work experience.
Other resources for recent graduates
The TechDev Jobs website (listed above in Other resources) is a great place to start. The jobs ‘advertised’ are usually closed but the collection lists several organisations that are currently employing people in the field of computer science (in the widest sense) and we are adding more all the time. Finding out about jobs is also about finding out about different sectors, some of which you might not have heard of yet – but they are all potential sources of jobs for people with computing skills.
Recent graduates or soon-to-graduate students may be able to help newer students get to grips with things in the Year 1 modules. Sometimes it’s not the computer science and programming that they or the lecturers need assistance with but really practical stuff like logging on and finding the relevant resources.
The Find A Job website from DWP (https://findajob.dwp.gov.uk/search) can be filtered by location and keyword too. Put in a keyword and see what pops up, then filter by salary etc.
Further study: if you’re interested in continuing your studies you might consider a Masters degree (MSc) in computer science and see the panel below for information on studying for a PhD, for which you are usually paid.
The Entry Level Games site isn’t a jobs board but if you’re interested in games design then it gives you a really helpful overview of some of the typical roles, what’s needed to do those roles and information from people who’ve done those jobs.
If you are interested in creating assistive technology or making computing more inclusive you might be interested in the work of the Global Disability Innovation Hub.
Networking is also a good idea, to build up contacts and hear about different roles. Some people find LinkedIn useful as an online version of networking and a great place to hear about newly opened vacancies. You can also take part in local hackathons, or volunteer at code clubs etc. This sort of thing is useful for your CV too.
There are probably organisations near you and it’s fairly likely that they’ll be using computers in one way or another, and you might be useful to them. Open up Google Maps and navigate to where you’re living, then zoom in and see what organisations are nearby. Make a note of them and if they have a vacancies page save that link in a document so that you can visit it every so often and see if a relevant new job has been added. Or contact them speculatively with your CV.
If you have a Gmail account you can set up Google Alerts. Whenever a new web page that satisfies your search criteria is published (e.g. a new job vacancy), you’ll get a daily email with a summary of what’s been added and links to find out more. This is a way of bringing the job adverts to you!
Digitally stitching together 2D photographs to visualise the 3D world
Composite image of one green glass bottle made from three photographs. Image by Jo Brodie
Imagine you’re the costume designer for a major new film about a historical event that happened 400 years ago. You’d need to dress the actors so that they look like they’ve come from that time (no digital watches!) and might want to take inspiration from some historical clothing that’s being preserved in a museum. If you live near the museum, and can get permission to see (or even handle) the material that makes it a bit easier but perhaps the ideal item is in another country or too fragile for handling.
This is where 3D imaging can help. Photographs are nice but don’t let you get a sense of what an object is like when viewed from different angles, and they don’t really give a sense of texture. Video can be helpful, but you don’t get to control the view. One way around that is to take lots of photographs, from different angles, then ‘stitch’ them together to form a three dimensional (3D) image that can be moved around on a computer screen – an example of this is photogrammetry.
In the (2D) example above I’ve manually combined three overlapping close-up photos of a green glass bottle, to show what the full-size bottle actually looks like. Photogrammetry is a more advanced version of more or less the same thing: it uses computer software to line up the points that overlap, and can produce a faithful 3D representation of the object.
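As a small taste of that ‘lining up overlapping points’ step, here is a sketch using OpenCV’s built-in panorama stitcher. It only produces a flat 2D composite (full photogrammetry goes further and recovers 3D shape), and the filenames are placeholders for your own overlapping photos.

```python
# A 2D taste of photogrammetry: OpenCV's stitcher finds matching points
# in overlapping photos and blends them into one combined image.
# The filenames are placeholders for your own overlapping photos.
import cv2

photos = [cv2.imread(name)
          for name in ("bottle1.jpg", "bottle2.jpg", "bottle3.jpg")]

stitcher = cv2.Stitcher_create()
status, combined = stitcher.stitch(photos)
if status == cv2.Stitcher_OK:
    cv2.imwrite("combined.jpg", combined)
else:
    print("Couldn't find enough overlapping points to line the photos up")
```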
In the media below you can see a looping gif of the glass bottle being rotated first in one direction and then the other. This video is the result of a 3D ‘scan’ made from only 29 photographs using the free software app Polycam. With more photographs you could end up with a more impressive result. You can interact with the original scan here – you can zoom in and turn the bottle to view it from any angle you choose.
A looping gif of the 3D Polycam file being rotated one way then the other. Image by Jo Brodie
You might walk around your object and take many tens of images from slightly different viewpoints with your camera. Once your photogrammetry software has lined the images up on a computer you can share the result and then someone else would be able to walk around the same object – but virtually!
Photogrammetry is used by hobbyists (it’s fun!) but is also being used in lots of different ways by researchers. One example is the field of ‘restoration ecology’: in particular, monitoring damage to coral reefs over time, and monitoring whether particular reef recovery strategies are successful. Reef researchers can use several cameras at once to take lots of overlapping photographs, from which they can then create three dimensional maps of the area. A new project recently funded by NERC* called “Photogrammetry as a tool to improve reef restoration” will investigate the technique further.
Photogrammetry is also being used to preserve our understanding of delicate historic items such as Stuart embroideries at The Holburne Museum in Bath. These beautiful craft pieces were made in the 1600s using another type of 3D technique. ‘Stumpwork’ or ‘raised embroidery’ used threads and other materials to create pieces with a layered three dimensional effect. Here’s an example of someone playing a lute to a peacock and a deer.
“Satin worked with silk, chenille threads, purl, shells, wood, beads, mica, bird feathers, bone or coral; detached buttonhole variations, long-and-short, satin, couching, and knot stitches; wood frame, mirror glass, plush”, 1600s. Photo CC0 from Metropolitan Museum of Art uploaded by Pharos on Wikimedia.
Using photogrammetry (and other 3D techniques) means that many more people can enjoy, interact with and learn about all sorts of things, without having to travel or damage delicate fabrics, or corals.
*NERC (Natural Environment Research Council) and AHRC (Arts and Humanities Research Council) are two organisations that fund academic research in universities. They are part of UKRI (UK Research & Innovation), the wider umbrella group that includes several research funding bodies.
Other uses of photogrammetry
Cultural heritage and ecology examples are highlighted in this post, but photogrammetry is also used in interactive games (particularly virtual reality), engineering, crime scene forensics and the film industry – Mad Max: Fury Road, for example, used the technique to create a number of its visual effects. Hobbyists also create 3D versions of all sorts of objects (called ‘3D assets’) and sell them to games designers to include in their games for players to interact with.
What is photogrammetry? (12 November 2021) Great Barrier Reef Foundation “What it is, why we’re using it and how it’s helping uncover the secrets of reef recovery and restoration.” [EXTERNAL]
Even if you’re the best keyboard player in the world, the sound you can get from any one key is pretty much limited to ‘loud’ or ‘soft’, ‘short’ or ‘long’, depending on how hard and how quickly you press it. The note’s sound can’t be changed once the key is pressed. At best, on a piano, you can make it last longer using the sustain pedal. A violinist, on the other hand, can move their finger on the string while it’s still being played, changing its pitch to give a nice vibrato effect. Wouldn’t it be fun if keyboard players could do similar things?
Andrew McPherson and other digital music researchers at QMUL and Drexel University came up with a way to give keyboard performers more room to express themselves like this. TouchKeys is a thin plastic coating, overlaid on each key of a keyboard but barely noticeable to the keyboard player. The coating contains sensors and electronics that can change the sound when a key is touched. The TouchKeys’ electronics connect to the keyboard’s own controller and so change the sounds already being made, expanding the keyboard’s range. This opens up a whole world of new sonic possibilities to a performer.
The sensors can follow the position and movement of your fingers and respond appropriately in real time, extending the range of sounds you can get from your keyboard. By wiggling your finger from side to side on a key you can make a vibrato effect, or you can change the note’s pitch completely by sliding your finger up and down the key. The technology is similar to a phone’s touchscreen, where different movements (‘gestures’) make different things happen. An advantage of the system is that it can easily be applied to a keyboard a musician already knows how to play, so they’ll find it easy to start using without having to make big changes to their style of playing.
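To give a flavour of how finger position might be turned into sound (this is our illustration, not the actual TouchKeys code), here is a sketch that maps a made-up side-to-side finger reading onto a standard MIDI pitch-bend message using the mido library, assuming a MIDI output port is available:

```python
# Illustrative only -- not the TouchKeys implementation. Map a sensed
# side-to-side finger position (a made-up reading from -1.0 to 1.0)
# onto a MIDI pitch-bend message for whatever synth is listening.
import mido

def wiggle_to_pitch_bend(finger_x):
    # MIDI pitch bend runs from -8192 to 8191; scale a small wiggle
    # into a gentle, vibrato-sized bend.
    bend = int(finger_x * 2000)
    return mido.Message('pitchwheel', pitch=bend)

with mido.open_output() as port:       # the default MIDI output port
    port.send(wiggle_to_pitch_bend(0.3))
```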
They wanted to get TouchKeys out of the lab and into the hands of more musicians, so teamed up with members of London’s Music Hackspace community, who run courses in electronic music, to create some initial versions for sale. Early adopters were able to choose either a DIY kit to add to their own keyboard, wire up and start to play, or choose a ready-to-play keyboard with the TouchKeys system already installed.
The result is that lots of musicians are already using TouchKeys to get more from their keyboard in exciting new ways.
Jo Brodie and Paul Curzon, Queen Mary University of London
Earlier this year Professor Andrew McPherson gave his inaugural lecture (a public lecture given by an academic who has been promoted) at Imperial College London where he is continuing his research. Watch his lecture.
World Emoji Day is celebrated on the 17th of July every year (why?) and so we’ve put together a ‘Can you guess the film from the emoji’ quiz and added some emoji-themed articles about computer science and the history of computing.
An emoji film quiz
Emoji accessibility, and a ‘text version’ of the quiz
Computer science articles about emoji
Emoji are small digital pictures that behave like text – you can slot them easily into sentences (you don’t have to ‘insert an image’ from a file or worry about the picture pushing the text out of the way). You can even make them bigger or smaller with the text (🎬 – compare the one in the section title below). People use them as a quick way of sharing a thought or emotion, or adding a comment like a thumbs up, so they’re (sort of) a form of data representation. Even so, communication with emoji can be just as tricky, in terms of being misunderstood, as using words alone. Different age groups might read the same emoji and understand something quite different from it. What do you think 🙂 (‘slightly smiling face’ emoji) means? What do people older or younger than you think it means? Lots of people think it means “I’m quite happy about this” but others use it in a more sarcastic way.
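You can check the ‘emoji behave like text’ claim for yourself: in Python, an emoji is just an ordinary Unicode character with its own numeric code point, exactly like a letter.

```python
# Emoji really do behave like text: each is a Unicode character
# with a numeric code point, just like 'A' (which is 65).
movie = "🎬"
print(ord(movie))             # 127916, the clapper board's code point
print(hex(ord(movie)))        # 0x1f3ac
print("I liked it " + movie)  # slots into a sentence like any character
```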
1. An emoji film quiz 🎬
You can view the quiz online or download and print from Word or PDF versions. If you’re in a classroom with a projector the PowerPoint file is the one you want.
2. Emoji accessibility, and a text version of the quiz
We’ve included a text version for blind or visually impaired people which can either be read out by someone or by a screen reader. Use the ‘Text quiz’ files in Word or PDF above.
More generally, when people share photographs and other images on social media it’s helpful if they add some information about the image to the ‘Alt Text’ (alternative text) box. This tells people who can’t easily see the image what’s in the picture. Screenreaders will also tell people what the emojis are in a tweet or text message, but if you use too many… it might sound like this 😬.
This next article is about the history of computing and the development of the graphical icons for apps that started life being drawn on gridded paper by Susan Kare. You could print some graph / grid paper and design your own!
In 1977 NASA scientists at the Jet Propulsion Laboratory launched the interstellar probe Voyager 1 into space – and it just keeps going. It has now travelled 15 BILLION miles (24 billion kilometres), which is the furthest any human-made thing has ever travelled from Earth. It communicates with us here on Earth via radio waves, which can easily cross that massive distance. But even travelling at the speed* of light (all radio waves travel at that speed), each radio transmission takes 22.5 hours, so if NASA scientists send a command they have to wait nearly two days for a response. (The Sun is ‘only’ 93 million miles away from Earth and its light takes about 8 minutes to reach us.)
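That 22.5-hour figure is easy to sanity-check with the rounded numbers above (the precise distance gives the slightly longer time quoted):

```python
# Checking the one-way signal time using the article's rounded figures.
distance_km = 24_000_000_000     # roughly 24 billion km from Earth
speed_of_light_km_s = 300_000    # light travels 300,000 km each second

seconds = distance_km / speed_of_light_km_s
print(seconds / 3600)            # about 22 hours, one way
```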
FDS – The Flight Data System
The Voyager 1 probe has sensors to detect things like temperature or changes in magnetic fields, a camera to take pictures and a transmitter to send all this data back to the scientists on Earth. One of its three onboard computers (the Flight Data System, or FDS) takes that data, packages it up and transmits it as a stream of 1s and 0s to the waiting scientists back home, who decode it. Voyager 1 is where it is because NASA wanted to send a probe out beyond the limits of our Solar System, into ‘interstellar space’ far away from the influence of our Sun, to see what the environment is like there. It regularly sends back data updates which include information about its own health (how well its batteries are doing etc) along with the scientific data, packaged together into that radio transmission. NASA can also send commands up to its onboard computers – computers that were built in 1977!
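Here is a toy version of that ‘packaging up’ step. The readings are invented, but the idea is the same: a fixed byte layout turns measurements into a stream of 1s and 0s that the receiving end knows how to decode.

```python
# A toy version of packaging telemetry: invented readings packed into
# a fixed byte layout (two 32-bit floats and one status byte).
import struct

temperature = -120.5          # made-up sensor readings
field_strength = 0.03
battery_ok = 1

packet = struct.pack(">ffB", temperature, field_strength, battery_ok)
print(packet.hex())           # the bytes that would be radioed to Earth

# Back on Earth, the same format string decodes the stream.
print(struct.unpack(">ffB", packet))
```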
The pale blue dot
‘The Pale Blue Dot’. In the thicker apricot-coloured band on the right you might be able to see a tiny dot about halfway down. That’s the Earth! Full details of this famous 1990 photo here.
Although its camera is no longer working, its most famous photograph is this one, the Pale Blue Dot: a snapshot of every single person alive on the 14th of February 1990. However, as Voyager 1 was 6 billion miles from home when it looked back at the Earth to take that photograph, you might have some difficulty spotting anyone! But they’re all somewhere in there, inside that single pixel (actually less than a pixel!) which is our home.
As Voyager 1 moved further and further away from our own planet, visiting Jupiter and Saturn before travelling to our outer Solar System and then beyond, the probe continued to send data and receive commands from Earth.
The messages stopped making sense
All was going well, with the scientists and Voyager 1 ‘talking’ to one another, until November 2023, when the binary 1s and 0s it normally transmitted no longer had any meaningful pattern to them: it was gibberish. The scientists knew Voyager 1 was still ‘alive’ as it was able to send a signal, but they didn’t know why that signal no longer made any sense. Given that the probe is nearly 50 years old and operating in a pretty harsh environment, people wondered if this was the natural end of the project, but they were determined to try and re-establish normal contact with the probe if they could.
Searching for a solution
They pored over almost 50-year-old paper instruction manuals and blueprints to try and work out what was wrong, and it seemed that the problem lay in the FDS. Any scientific data being collected was not being correctly stored in the ‘parcel’ that was transmitted back to Earth, and so was lost – Voyager 1 was sending empty boxes. At that distance it’s too far to send an engineer up to switch it off and on again, so instead they sent a command to try and restart things. The next message from Voyager 1 was a different string of 1s and 0s. Not quite the normal data they were hoping for, but also not entirely gibberish. A NASA scientist decoded it and found that Voyager 1 had sent a readout of the FDS’s memory. That told them where the problem was: a damaged chip meant that part of the memory couldn’t be properly accessed. They had to move the memory away from the damaged chip.
That’s easier said than done. There’s not much spare space, as the computers can only store 68 kilobytes of data in total (absolutely tiny compared to today’s computers and devices). There wasn’t one single place where NASA scientists could move the memory as a single block; instead they had to break it up into pieces and store them in different places. To do that they had to rewrite some of the code so that each separated piece contained information about how to find the next piece. Imagine if a library didn’t keep a record of where each book was – it would make it very hard to find and read the sequel!
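In outline (hugely simplified, and not NASA’s actual code), the fix works like a chain: each relocated chunk records where the next one lives, so the scattered pieces can still be read in order.

```python
# A hugely simplified sketch of the fix: scatter chunks of code into
# whatever free slots exist, each chunk recording where the next lives.
free_slots = [0x200, 0x450, 0x7F0]     # made-up free memory addresses
code = ["step one", "step two", "step three"]

memory = {}                            # address -> (chunk, next address)
for i, (addr, chunk) in enumerate(zip(free_slots, code)):
    next_addr = free_slots[i + 1] if i + 1 < len(code) else None
    memory[addr] = (chunk, next_addr)

# Following the chain recovers the program in its original order.
addr = free_slots[0]
while addr is not None:
    chunk, addr = memory[addr]
    print(chunk)
```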
Earlier this year NASA sent up a new command to Voyager 1, giving it instructions on how to move a portion of its memory from the damaged area to its new home(s) and waited to hear back. Two days later they got a response. It had worked! They were now receiving sensible data from the probe.
Voyager team celebrates engineering data return, 20 April 2024 (NASA/JPL-Caltech). “Shown are Voyager team members Kareem Badaruddin, Joey Jefferson, Jeff Mellstrom, Nshan Kazaryan, Todd Barber, Dave Cummings, Jennifer Herman, Suzanne Dodd, Armen Arslanian, Lu Yang, Linda Spilker, Bruce Waggoner, Sun Matsumoto, and Jim Donaldson.”
For a while it was just basic ‘engineering data’ (about the probe’s status), but they knew their method worked and hadn’t harmed the distant traveller. They also knew they’d need to do a bit more work to get Voyager 1 to move more memory around in order for the probe to start sending back useful scientific data, and…
Success!
…in May, NASA announced that scientific data from two of Voyager 1’s instruments was finally being sent back to Earth, and in June the probe was fully operational. You can follow Voyager 1’s updates on Twitter / X via @NASAVoyager.
Did you know?
Both Voyager 1 and Voyager 2 carry with them a gold-plated record called ‘The Sounds of Earth‘ containing “sounds and images selected to portray the diversity of life and culture on Earth”. Hopefully any aliens encountering it will have a record player (but the Voyager craft do carry a spare needle!) Credit: NASA/JPL
References
Lots of articles helped in the writing of this one and you can download a PDF of them here. Featured image credit showing the Voyager spacecraft: NASA/JPL.
*Radio waves and light are part of the electromagnetic or ‘EM’ spectrum, along with microwaves, gamma rays, X-rays, ultraviolet and infrared. All these waves travel at the same speed in a vacuum – the speed of light (300,000,000 metres per second, sometimes written as 3 × 10⁸ m/s or 3 × 10⁸ m s⁻¹) – but the waves differ in their frequency and wavelength.
Subscribe to be notified whenever we publish a new post to the CS4FN blog.
EPSRC supports this blog through research grant EP/W033615/1.