Kimberly Bryant: Black Girls Code

Kimberly Bryant in 2016, Ståle Grut / nrkbeta, CC BY-SA 2.0, via Wikimedia Commons

Kimberly Bryant was born on 14 January 1967 in Memphis, Tennessee, and was enthusiastic about maths and science at school, describing herself as a ‘nerdy girl’. She was awarded a scholarship to study engineering at university and, while there, switched to electrical engineering with computer science and maths. During her career she has worked in several industries, including pharmaceuticals, biotechnology and energy.

She is best known, though, for founding Black Girls Code. In 2011 her daughter wanted to learn computer programming, but nearly all the students on the nearest courses were boys and hardly any African American students were enrolled. Kimberly didn’t want her daughter to feel isolated (as she herself had felt), so she created Black Girls Code (BGC) to provide after-school and summer-school coding lessons for African American girls. BGC has a goal of teaching one million Black girls to code by 2040, and every year thousands of girls learn coding alongside their peers.

She has received wide recognition for her work. She was given the Jefferson Award for Community Service for the support she offered to girls in her local community, and in 2013 Business Insider included her on its list of The 25 Most Influential African-Americans in Technology. During Barack Obama’s presidency, the White House website honoured her as one of its eleven Champions of Change in Tech Inclusion – Americans who are “doing extraordinary things to expand technology opportunities for young learners – especially minorities, women and girls, and others from communities historically underserved or underrepresented in tech fields.”



EPSRC supports this blog through research grant EP/W033615/1.

Bringing people closer when they’re far away

Two children playing with a tin-can telephone, which lets them talk to each other at a distance. Picture credit Jerry Loick KONZI, CC BY-SA 4.0, via Wikimedia Commons

This article was written before the Covid pandemic led to many more of us keeping in touch from a distance…

Living far away from the person you love is tough. You spend every day missing their presence. The Internet can help, and many couples in long-distance relationships use video chat to see more of each other. It’s not the same as being right there with someone else, but couples find ways to get as much connection as they can out of their video chats. Some researchers in Canada, at the University of Calgary and Simon Fraser University, interviewed couples in long-distance relationships to find out how they use video chat to stay connected.

Nice to see you

The first thing that the researchers found is perhaps what you might expect. Couples use video chat when it’s important to see each other. You can text little messages like ‘I love you’ to each other, or send longer stories in an email, and that’s fine. But seeing someone’s face when they’re talking to you feels much more emotionally close. One member of a couple said, “The voice is not enough. The relationship is so physical and visual. It’s not just about hearing and talking.” Others reported that seeing each other’s face helped them know what the other person was feeling. For one person, just seeing his partner’s face when she was feeling worn out helped him understand her state of mind. In other relationships, seeing one another helped avoid misunderstandings that come from trying to interpret tone of voice. Plus, having video helped couples show off new haircuts or clothes, or give each other tours of their surroundings.

Hanging out on video

The couples in the study didn’t use video chat just to have conversations. They also used it in a more casual way: to hang out with each other while they went about their lives. Their video connections might stay open for hours at a time while they did chores, worked, read, ate or played games. Long silences might pass. Couples might not even be visible to each other all the time. But each partner would, every once in a while, check back at the video screen to see what the other was up to. This kind of hanging out helped couples feel the presence of the other person, even if they weren’t having a conversation. One participant said of her partner, “At home, a lot of times at night, he likes to put on his PJs and turn out all the lights and sit there with a snack and, you know, watch TV… As long as you can see the form of somebody that’s a nice thing. I think it’s just the comfort of knowing that they’re there.”

Some couples felt connected by doing the same things together in different places. They shared evenings together in living rooms far away from each other, watching the same thing on television or even getting the same movie to watch and starting it at the same time. Some couples had dinner dates where they ordered the same kind of takeaway and ate it with each other through their video connection.

Designing to connect

This might not sound like research about human-computer interaction. It’s about the deepest kind of human interaction. But good computer design can help couples feel as connected as possible. The researchers also wanted to find out how they could help couples make their video chats better. Designers of the future might think about how to make gadgets that make video chat easier to do while getting on with other chores. It’s difficult to talk, film yourself, cook and move through the house all at the same time. What’s more, today’s gadgets aren’t really built to go everywhere in the house. Putting a laptop in a kitchen or propping one up in a bed doesn’t always work so well. The designers of operating systems need to work out how to do other stuff at the same time as video. If couples want to have a video chat connection open for hours, sometimes they might need to browse the web or write a text message at the same time. And what about couples who like to fall asleep next to one another? They might need night-vision cameras so they can see their partner without disturbing their sleep.

We’re probably going to have more long-distance relationships in the future. Easy, cheap travel makes it easier to move to faraway places: you can go to university abroad, or join a company with offices on every continent. It’s an awfully good thing that technology is making it easier to stay connected with the people who are important to us. Video chat is not nearly as good as feeling your lover’s touch, but when you really miss someone, even watching them do chores helps.

Paul Curzon, Queen Mary University of London



All of our free material can be downloaded here: https://cs4fndownloads.wordpress.com/

EPSRC supports this blog through research grant EP/W033615/1.



Hedy Lamarr: The movie star, the piano player and the torpedo

Hedy Lamarr. Image credit: eBay, Public domain, via Wikimedia Commons

Hedy Lamarr was a movie star. Back in the 1940s, in Hollywood’s Golden Age, she was considered one of the screen’s most beautiful women and appeared in several blockbusters. But Hedy was more than just good looks and acting skills. Though many people remember her for the pithy quote “Any girl can be glamorous. All she has to do is stand still and look stupid”, during World War II she and composer George Antheil invented an encryption technique for a torpedo radio-guidance system!

Their creative idea for an encryption system was based on the mechanism behind the ‘player piano’ – an automatic piano where the tune is controlled by a roll of paper with punched holes. The idea was to use what is now known as ‘frequency hopping’ to overcome the possibility of the control signal being jammed by the enemy. Normal radio communication involves the sender picking a radio frequency and then sending all communication at that frequency. Anyone who tunes in to that frequency can listen in, but can also jam it by sending their own more powerful signal at the same frequency. That’s why non-digital radio stations constantly tell you their frequency: “96.2 FM” or whatever. Frequency hopping involves jumping from frequency to frequency throughout the broadcast. Only someone who shares the secret of exactly when the jumps will be made, and to what frequencies, can follow the broadcast – or jam it. That is essentially what the piano roll could do: it stored the secret.
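To get a feel for why the shared secret matters, here is a minimal sketch of the idea in Python. The channels and seed below are made up for illustration (Lamarr and Antheil’s patent famously used 88 frequencies, one for each piano key), and a real system hops many times a second:

```python
import random

# The shared secret plays the role of the piano roll: both ends can
# generate exactly the same 'random' hop sequence from it.
SECRET_SEED = 8675309                                # made-up shared secret
CHANNELS = [88.0, 91.4, 94.6, 96.2, 99.8, 103.1]     # made-up frequencies (MHz)

def hop_sequence(seed, hops):
    """Generate the sequence of channels a party will follow."""
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

# Sender and receiver compute the same schedule independently...
sender = hop_sequence(SECRET_SEED, hops=5)
receiver = hop_sequence(SECRET_SEED, hops=5)
assert sender == receiver        # ...so they stay perfectly in sync

# An eavesdropper without the seed can only guess which channel is next.
eavesdropper = hop_sequence(seed=12345, hops=5)
overheard = sum(1 for s, e in zip(sender, eavesdropper) if s == e)
print(f"sender's plan: {sender}")
print(f"eavesdropper guessed right on {overheard} of 5 hops")
```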

Though the navy didn’t actually use the method during World War II, they did use its principles during the Cuban missile crisis in the 1960s. The idea behind the method is also used in today’s GPS, Wi-Fi, Bluetooth and mobile phone technologies, underpinning so much of the technology of today. In 2014 she was posthumously inducted into the US National Inventors Hall of Fame.

Peter W McOwan, Queen Mary University of London (from the archive)






EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Gary Starkweather: the laser printer and colour management

Gary Starkweather, born 9 January 1938, invented and developed the first laser printer. In the late 1960s he was an engineer with a background in optics, working in the US for the Xerox company (famous for its photocopiers). He came up with the idea of using a laser beam to draw the image to be printed directly onto a photocopier’s drum, speeding up the process of printing documents. Laser printers are now found in offices worldwide – you may even have one at home.

He also invented colour management: a way of ensuring that a shade of blue on your computer’s or phone’s screen looks the same on a TV screen or when printed out. Different devices display colours differently, so ‘red’ on one device might not be the same as ‘red’ on another. Colour management happens behind the scenes, translating the colour instruction from one device to produce the closest match on another. The International Color Consortium (ICC) helps device manufacturers ensure that colour is “seamless between devices and documents”.
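Behind the scenes, the translation usually goes through a device-independent ‘hub’ colour space. Here is a simplified Python sketch of that first step, converting an sRGB screen colour into the CIE XYZ hub space using the published sRGB formulae; real ICC-based colour management does much more (per-device profiles, gamut mapping), so treat this as an illustration only:

```python
def srgb_to_linear(c):
    """Undo the sRGB gamma curve (c is in the range 0.0 to 1.0)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Map an sRGB colour into device-independent CIE XYZ (D65).

    Colour management maps every device's colours into a shared hub
    space like XYZ, then out again into the target device's own space.
    """
    rl, gl, bl = (srgb_to_linear(c) for c in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

# The instruction 'pure red' means this in the hub space:
print(srgb_to_xyz(1.0, 0.0, 0.0))   # roughly (0.4124, 0.2126, 0.0193)
```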

Starkweather received an Academy Award (an Oscar) for Technical Achievement in 1994, for the work he’d done in colour film scanning. That involves taking a strip of film and converting it digitally so it can be edited on a computer.

As a nice coincidence, on the same day of the year as his birth, 9 January, the first Apple iPhone was announced in 2007 (though it was not available until June that year)… and all iPhones use colour management!

– Jo Brodie, Queen Mary University of London



This blog is funded by EPSRC on research agreement EP/W033615/1.


Lynn Conway: revolutionising chip design

Lynn Conway. Photo by Charles Rogers, CC BY-SA 2.5, via Wikimedia Commons

University of Michigan professor and transgender activist Lynn Conway, along with Carver Mead, completely changed the way we think about, do and teach VLSI (Very Large Scale Integration) chip design. Their revolutionary book on VLSI design quickly became the standard text used to teach the subject around the world. It wasn’t just a book, though: it was a whole new way of doing electronics. Their ideas formed the foundation of the way the electronics industry subsequently worked, and still does today. Calling her impact totally transformational is not an exaggeration. Before this, she had worked for IBM, part of a team making major advances in microprocessor design. She was, however, sacked by IBM for being transgender when she decided to transition. Fortunately, times and views have been transformed too, and IBM subsequently apologised for its blatant discrimination!

A core part of the electronics revolution Mead and Conway triggered was to start thinking of electronics design as more like software. They advocated using special design packages and languages that allowed hardware designers to put together a circuit design essentially by programming it. Once a design was completed, tools in the package could simulate the behaviour of the circuit, allowing it to be thoroughly tested before the circuit was physically built. The result was that designs were less likely to fail, and creating them was much quicker. Even better, once tested, the design could be compiled directly to silicon: the programmed version could be used to automatically create the precise layout and wiring of components, below the transistor level, to be laid onto the chip for fabrication.

This software approach made it much easier to use levels of abstraction in electronics design: bigger components built from smaller ones, in turn built from smaller ones still. Once a smaller component was designed, its detailed implementation could be ignored in the design of larger components. A key part of this was Conway’s idea of scalable design rules to follow as designs grew. Designers could focus on high-level design, building on previous designs, with the details of creating the physical chips automated from the high-level descriptions.
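To see why levels of abstraction matter, here is a toy sketch in Python rather than a real hardware description language: an adder is built from gates, the gates from NANDs, and the NAND stands in for a handful of transistors. Each level is simulated and tested, then used as a black box by the level above, echoing the ‘simulate before you fabricate’ workflow described above:

```python
def nand(a, b):
    """Lowest level: stands in for a few transistors on silicon."""
    return 1 - (a & b)

# One level up: logic gates built only from NANDs.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Another level up: an adder built from gates. At this level we no
# longer think about NANDs, let alone transistors.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)              # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, or_(c1, c2)                     # (sum, carry out)

# Simulate exhaustively before 'fabricating' anything.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
print("full adder design verified")
```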

This transformation is similar to (though probably even more transformational than) the switch from programming in low-level languages to writing programs in high-level languages and letting a compiler create the actual low-level code that is run. Just as that allowed vastly larger programs to be written, the use of electronic design automation software and languages allowed massively larger circuits to be created.

Conway’s ideas also led to MOSIS: an Internet-based service whereby different designs by different customers could be combined onto one wafer for production. This meant the fabrication costs of prototyping were no longer prohibitively expensive. Suddenly, creating designs was cheap and easy: a boon for university and industrial research as well as for VLSI education. Conway, for example, pioneered the idea of allowing her students to create their own VLSI designs as part of her university course, with their designs all fabricated together and the resulting chips quickly returned. Large numbers of students could now learn VLSI design in a practical way, gaining hands-on experience while still at university. This improvement in education, together with the ease with which small companies could suddenly prototype new ideas, made possible the subsequent boom in hi-tech start-up companies at the end of the 20th century.

Before Mead and Conway, chip design was done slowly, by hand, by a small elite, and needed big-industry support. Afterwards it could be done quickly and easily by just about anyone, anywhere.

Paul Curzon, Queen Mary University of London



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Sorry to bug you: Grace Hopper

Close-up, head on, of a flying butterfly or moth. Image by Josch13 from Pixabay

In the 2003 film The Matrix Reloaded, Neo, Morpheus, Trinity and crew continue their battle with the machines that have enslaved the human race in the virtual reality of the Matrix. To find the Oracle, who can explain what’s going on (which, given the twisty plot of the Matrix films, is always a good idea), Trinity needs to break into a power station and switch off some power nodes so the others can enter the secret floor. The computer terminal displays that she is disabling 27 power nodes, numbers 21 to 48. Unfortunately, that’s actually 28 nodes, not 27! A computer that can’t count, showing the wrong message!
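The slip is a classic ‘fencepost’ (off-by-one) error: counting an inclusive range by subtracting the endpoints forgets to count the first item. A couple of lines of Python show the trap:

```python
first, last = 21, 48
print(last - first)       # 27 -- forgets to count node 21 itself
print(last - first + 1)   # 28 -- the real number of nodes

# Python's range() excludes its end point, so add 1 to include node 48.
assert len(range(first, last + 1)) == 28
```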

Sadly, there are far too many programs with mistakes in them. These mistakes are known as bugs because back in 1947 Grace Hopper, one of the female pioneers of computer science, found an error caused by a moth trapped between the points at Relay 70, Panel F, of the Mark II Aiken Relay Calculator being tested at Harvard University. She removed the moth and attached it to her test logbook, writing ‘First actual case of bug being found’, and so popularised the term ‘debugging’ for testing and fixing a computer program.

Grace Hopper is famous for more than just the word ‘bug’ though. She was one of the most influential of the early computer pioneers, responsible for perhaps the most significant idea in helping programmers to write large, bug-free programs.

As a Lieutenant in the US Navy reserves, having volunteered after Pearl Harbor, Grace was one of the first three programmers of Harvard’s IBM Mark I: the first fully automatic programmed computer.

She didn’t just program those early computers though, she came up with innovations in the way computers were programmed. The programs for those early computers all had to be made up of so-called ‘machine instructions’. These are the simplest operations the computer can do: such as to add two numbers, move data from a place in memory to a register (a place where arithmetic can be done in a subsequent operation), jump to a different instruction in the program, and so on.

Programming in such basic instructions is a bit like giving someone directions to the station but having to tell them exactly where to put their foot for every step. Grace’s idea was that you could write programs in a language closer to human language where each instruction in this high-level language stood for lots of the machine instructions – equivalent to giving the major turns in those directions rather than every step.

The ultimate result was COBOL: the first widely used high-level programming language. At a stroke her ideas made programming much easier to do and much less error-prone. Big programs were now a possibility.

For this idea of high-level languages to work, though, you need a way to convert a program written in a high-level language like COBOL into the machine instructions a computer can actually execute. The computer can’t fill in the gaps on its own! Grace had the answer: the ‘compiler’. It is just another computer program, but one that does a specialist task: the conversion. Grace wrote the first ever compiler, for a language called A-0, as well as the first COBOL compiler. The business computing revolution was up and running.
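To give a flavour of the idea, here is a toy ‘compiler’ in Python. The high-level statement is COBOL-flavoured, but the machine instructions it expands into are invented for illustration; a real compiler targets a genuine instruction set and does far more (parsing, checking, optimising):

```python
def compile_statement(line):
    """Expand one high-level statement into several low-level ones."""
    words = line.split()
    if words[0] == "ADD":                  # e.g. "ADD PAY TO TOTAL"
        _, src, _, dest = words
        return [f"LOAD  R1, {src}",        # fetch the values into registers
                f"LOAD  R2, {dest}",
                f"ADD   R2, R1",           # do the arithmetic
                f"STORE R2, {dest}"]       # put the result back in memory
    raise ValueError(f"unknown statement: {line}")

for instruction in compile_statement("ADD PAY TO TOTAL"):
    print(instruction)
```

One friendly line of program becomes four fiddly low-level ones – and in a real machine each of those would itself just be a number.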

High-level languages like COBOL have allowed far larger programs to be written than is possible in machine code, and so ultimately the expansion of computers into every part of our lives. Of course even high-level programs can still contain mistakes, so programmers still need to spend most of their time testing and debugging. As the Oracle would no doubt say, “Check for moths, Trinity, check for moths”.

– Peter W McOwan and Paul Curzon, Queen Mary University of London (from the archive)



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

The first computer music

A robot with a horn. Image by www_slon_pics from Pixabay

The first recorded music by a computer program was the result of a flamboyant flourish added to the end of a program that played draughts in the early 1950s. It played God Save the King.

The first computers were developed towards the end of the Second World War to do the number crunching needed to break the German codes. After the war, several groups around the world set about manufacturing computers, including three in the UK. This was still a time when computers filled whole rooms, and it was widely believed that a whole country would only need a few. The uses envisioned tended to involve lots of number crunching.

A small group of people could see that computers could be much more fun than that, one of them being school teacher Christopher Strachey. After being introduced to the Pilot ACE computer on a visit to the National Physical Laboratory, he set about writing, in his spare time, a program that could play draughts against humans. Unfortunately, the computer didn’t have enough memory for his program.

He had known Alan Turing, one of those wartime pioneers, when they were both at university before the war. Luckily, he heard that Turing, now at the University of Manchester, was working on the new Ferranti Mark I computer, which would have more memory, so he wrote to him to see if he could get to play with it. Turing invited him to visit, and on the second visit, having had a chance to write a version of the program for the new machine, he was given the chance to get his draughts program working on the Mark I. He was left to get on with it that evening.

He astonished everyone the next morning by having the program working and ready to demonstrate: he had worked through the night to debug it. Not only that, but as it finished running, to everyone’s surprise, the computer played the national anthem, God Save the King. As Frank Cooper, one of those there at the time, said: “We were all agog to know how this had been done.” Strachey’s reputation as one of the first wizard programmers was sealed.

The reason it was possible to play sounds on the computer at all was nothing to do with music. A special command called ‘Hoot’ had been included in the set of instructions programmers could use (called the ‘order code’ at the time) when programming the Mark I. The computer was connected to a loudspeaker, and Hoot was used to signal things like the end of the program, alerting the operators. Apparently it hadn’t occurred to anyone there but Strachey that this was everything you needed to create the first computer music.
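A single hoot was just a click, but repeat clicks fast enough and the ear hears a pitched note: about 262 clicks a second gives roughly middle C. The Python sketch below imitates the trick by writing a pulse train for each note into a WAV file; the tune is a guess at the opening of Baa Baa Black Sheep, and the Mark I’s real timings and timbre were quite different:

```python
import struct
import wave

RATE = 8000  # audio samples per second

def hoot_note(freq, secs):
    """Imitate 'hoots' repeated freq times a second as a square pulse
    train, which is heard as a note at that pitch."""
    samples = []
    for n in range(int(RATE * secs)):
        phase = (n * freq / RATE) % 1.0
        samples.append(20000 if phase < 0.5 else -20000)
    return samples

# A guess at the opening of Baa Baa Black Sheep: (frequency Hz, seconds).
C, G, A = 262, 392, 440
tune = [(C, .3), (C, .3), (G, .3), (G, .3),
        (A, .15), (A, .15), (A, .15), (A, .15), (G, .6)]

frames = bytearray()
for freq, secs in tune:
    for s in hoot_note(freq, secs):
        frames += struct.pack("<h", s)   # 16-bit signed samples

with wave.open("hoot.wav", "wb") as f:
    f.setnchannels(1)                    # mono, like a single loudspeaker
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(bytes(frames))
print("wrote hoot.wav")
```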

He also programmed it to play Baa Baa Black Sheep, and went on to write a more general program that would allow any tune to be played. When a BBC Live Broadcast Unit visited the university in 1951 to see the computer for Children’s Hour, the Mark I gave the first ever broadcast performance of computer music, playing Strachey’s repertoire: the UK national anthem, Baa Baa Black Sheep and In the Mood.

While this was the first recorded computer music, Strachey was probably beaten to the first actual programmed computer music by a team in Australia who had similar ideas and did a similar thing, probably slightly earlier. They used the equivalent hoot instruction on the CSIRAC computer, developed there by Trevor Pearcey and programmed by Geoff Hill. Both teams were years ahead of anyone else, and it was a long time before anyone took the idea of computer music seriously.

Strachey went on to be a leading figure in the design of programming languages, responsible for many of the key advances that have led to programmers being able to write the vast and complex programs of today.

The recording made of the performance has since been rediscovered and restored, so you can now listen to the performance yourself.

Paul Curzon, Queen Mary University of London

(updated from the archive)




This blog is funded by UKRI, through grant EP/W033615/1.

Swat a way to drive

Flies are small, fast and rather cunning. Try to swat one and you will see just how efficient their brain is, even though it has so few brain cells that each one of them can be counted and given a number. A fly’s brain is a wonderful proof that, if you know what you’re doing, you can efficiently perform clever calculations with a minimum of hardware. The average household fly’s ability to detect movement in the surrounding environment, whether it’s a fly swat or your hand, is due to some cunning wiring in their brain.

Speedy calculations

Movement is measured by detecting something changing position over time: the ratio distance/time gives the speed, and flies have built-in speed detectors. The fly’s eye is a wonderful piece of optical engineering in itself, with hundreds of lenses forming the mosaic of the compound eye. Each lens looks at a different part of the surrounding world, and so each registers whether something is at a particular position in space.

All the lenses are also linked by a series of nerve cells, and these nerve cells each have a different delay: a signal takes longer to pass along one nerve than another. When a lens spots an object in its part of the world, say position A, it fires a signal into the nerve cells, and these signals spread out, with their different delays, to the other lenses’ positions.

The separation between the different areas that the lenses view (distance) and the delays in the connecting nerve cells (time) are such that a whole range of possible speeds is coded in the nerve cells. The fly’s brain just has to match the speed of the passing object with one of the speeds encoded there. When the object moves from A to B, the fly knows the correct speed if the first, delayed signal from position A arrives at the same time as the new signal from position B. The arrival of the two signals is correlated: they are linked by a well-defined relation, in this case the speed they represent.
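This delay-and-correlate wiring is what scientists call a Reichardt detector, and the principle fits in a few lines of Python. The signals below are made up for illustration: two ‘lenses’ watch positions A and B, and the detector only responds when the time the object takes to cross the gap matches the delay on A’s nerve:

```python
def detect(signal_a, signal_b, delay):
    """Correlate B's signal with a delayed copy of A's signal.
    A big response means: something crossed from A to B at the
    speed this detector's delay is tuned to."""
    return sum(signal_a[t - delay] * signal_b[t]
               for t in range(delay, len(signal_b)))

# An object passes position A at time step 3 and position B at
# time step 7, so it took 4 steps to cross the gap.
a = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]

print(detect(a, b, delay=4))  # 1: this detector's tuned speed matches
print(detect(a, b, delay=2))  # 0: a faster-tuned detector stays quiet
```

A real fly has banks of such detectors, each tuned to a different speed and direction, all working in parallel.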

Do locusts like Star Wars?

Understanding the way that insects see gives us clever new ways to build things, and can also lead to some bizarre experiments. Researchers in Newcastle showed locusts edited highlights from the original Star Wars movie. Why, you might ask? Do locusts enjoy a good science fiction movie? It turns out the researchers were looking to see if locusts could detect collisions – there are plenty of those in the battles between X-wing fighters and TIE fighters – and whether this collision-detecting ability could be turned into a design for a computer chip. The work, part-funded by car-maker Volvo, used such a strange way to examine locusts’ vision that it won an Ig Nobel award in 2005. Ig Nobel awards are presented each year for weird and wonderful scientific experiments, and have the motto ‘Research that makes people laugh then think’. You can find out more at http://improbable.com

Car crash: who is to blame?

So what happens if we start to use these insect ‘eye’ detectors in cars, building vehicles that detect and avoid collisions the way insects do? We now have smart cars where artificial intelligence (AI) takes over from the driver, completely or just to avoid hitting other things, and an interesting question arises. When an accident does happen, who is to blame? Is it the car driver: are they in charge of the vehicle? Is the AI to blame? And who is responsible for that: the AI itself (if one day we give machines human-like rights)? The car manufacturer? The computer scientists who wrote the program? If we do build cars with fly-like or locust-like intelligence, which avoid accidents like flies avoid swatting or spot possible collisions like locusts, is it the insect whose brain was copied that is to blame?! What will insurance companies decide? What about the courts?

As computer science makes new things possible, society quickly needs to decide how to deal with them. Unlike the smart cars, these decisions aren’t something we can avoid.

by Peter W McOwan, Queen Mary University of London (updated from the archive)



EPSRC supports this blog through research grant EP/W033615/1. 

Future Friendly: Focus on Kerstin Dautenhahn

by Peter W McOwan, Queen Mary University of London

(from the archive)

A large robot facing a man in his home. Image by Meera Patil from Pixabay

Kerstin Dautenhahn is a biologist with a mission: to help us make friends with robots. Kerstin was always fascinated by the natural world around her, so it was no surprise when she chose to study biology at the University of Bielefeld in Germany. Afterwards she went on to take a Diploma in Biology, doing research on the leg reflexes of stick insects – a strange start, it may seem, for someone who would later become one of the world’s foremost robotics researchers. But it was through this fascinating bit of biology that Kerstin became interested in the ways that living things process information and control their body movements, an area scientists call biological cybernetics. This interest in trying to understand biology made her want to build things to test her understanding: things based on ideas copied from biological animals but run by computers. These things would be robots.

Follow that robot

From humble beginnings building small robots that followed one another over a hilly landscape, she started to realise that biology was a great source of ideas for robotics and, in particular, that the social intelligence animals use to live and work with each other could be modelled and used to create sociable robots.

She started to ask fascinating questions like “What’s the best way for a robot to interrupt you if you are reading a newspaper – by gesturing with its arms, blinking its lights or making a sound?” and, perhaps most importantly, “When would a robot become your friend?” First at the University of Hertfordshire, and now as a professor at the University of Waterloo, she leads a world-famous research group looking to build friendly robots with social intelligence.

Good robot / Bad robot – East vs West

Kerstin, like many other robotics researchers, is worried that most people tend to look on robots as potentially evil. If we look at the way robots are portrayed in the movies, that’s often how it seems: it makes a good story to have a mechanical baddie. But in reality robots can provide a real service to humans: helping the disabled, assisting around the home and even becoming friends and companions. The baddie-robot idea tends to dominate in the West, but in Japan robots are very popular and robotics research is advancing at a phenomenal rate. There has been a long history in Japan of people finding mechanical things that mimic natural things interesting and attractive. It is partly this cultural difference that has made Japan a world leader in robot research. But Kerstin and others like her are trying to get those of us in the West to change our opinions, by building friendly robots and looking at how we relate to them.

Polite Robots roam the room

When at the University of Hertfordshire, Kerstin decided that the best way to see how people would react to a robot around the house was to rent a flat near the university and fill it with robots. Rather than examining how people interacted with robots in a laboratory, moving the experiments to a real home, with bookcases, biscuits, sofas and coffee tables, made it real. She and her team looked at how to give their robots social skills: what was the best way for a robot to approach a person, for example? At first they thought the best approach would be straight from the front, but they found that humans felt this was too aggressive, so the robots were trained to come up gently from the side. The people in the house were also given special ‘comfort buttons’: devices that let them indicate how they were feeling in the company of robots. Again interesting things happened. It turned out that quite a lot of people, though not all, were on the whole happy for these robots to be close to them – closer, in fact, than they would normally let a human approach. Kerstin explains: ‘This is because these people see the robot as a machine, not a person, and so are happy to be in close proximity. You are happy to move close to your microwave, and it’s the same for robots.’ These are exciting first steps as we start to understand how to build robots with socially acceptable manners. But it turns out that robots need to have good looks as well as good manners if they are going to make it in human society.

Looks are everything for a robot?

This fall in acceptability
is called the ‘uncanny valley’

How we interact with robots also depends on how the robots look. Researchers had found previously that if you make a robot look too much like a human being, people expect it to be a human being, with all the social and other skills that humans have. If it doesn’t have these, we find interaction very hard. It’s like working with a zombie, and it can be very frightening. This fall in the acceptability of robots that look like, but aren’t quite, human is what researchers call the ‘uncanny valley’: people prefer to encounter a robot that looks like a robot and acts like a robot. Kerstin’s group found this effect too, so they designed their robots to look and act the way we would expect robots to look and act, and things got much more sociable. But they are still looking at how we act with more human-like robots, and built KASPAR, a robot toddler with a very realistic rubber face capable of showing expressions and smiling, and video-camera eyes that allow the robot to react to your behaviour. He has arms, so he can wave goodbye or greet you with a friendly gesture. More recently he was extended with multi-modal technology that allowed several children to play with him at the same time. He’s very lifelike, and the hope was that as KASPAR’s programming grew and his abilities improved, he, or some descendant of him, would emerge from the uncanny valley to become someone’s friend – in particular, a friend to children with autism.

Autism – mind blindness and robots

The fact that most robots at present look like and act like robots can give them a big advantage in supporting children with autism. Autism is a condition that prevents you from developing an understanding of how to interact socially with the world. A current theory to explain the condition is that autistic people cannot form a correct understanding of others’ intentions; it’s called mind blindness. For example, if I came into the room wearing a hideous hat and asked you ‘Do you like my lovely new hat?’ you would probably think, ‘I don’t like the hat, but he does, so I should say I like it so as not to hurt his feelings’: you have a mental model of my state of mind (that I like my hat). An autistic person is likely to respond ‘I don’t like your hat’, if that is what they feel. Autistic people cannot create this mental model, so find it hard to make friends and generally interact with people, as they can’t predict what people are likely to say, do or expect.

Playing with Robot toys

It’s different with robots: many autistic children have an affinity with robots. Robots don’t do unexpected things. Their behaviour is much simpler, because they act like robots. Kerstin’s group examined how interaction with robot toys could help some autistic children develop skills to interact better with other people. By controlling the robot’s behaviours, some of the children can develop ways to mimic social skills, which may ultimately improve their quality of life. There were some promising results, and the work continues as one way of trying to help those living with this socially isolating condition.

Future friendly

It’s only polite that the last word goes to Kerstin from her time at Hertfordshire:

‘I firmly believe that robots as assistants can potentially be very useful in many application areas. For me as a researcher, working in the field of human-robot interaction is exciting and great fun. In our team we have people from various disciplines working together on a daily basis, including computer scientists, engineers and psychologist. This collaboration, where people need to have an open mind towards other fields, as well as imagination and creativity, are necessary in order to make robots more social.’

In the future, when robots become our workmates, colleagues and companions it will be in part down to Kerstin and her team’s pioneering effort as they work towards making our robot future friendly.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

The last speaker

by Paul Curzon, Queen Mary University of London

(from the cs4fn archive)

The languages of the world are going extinct at a rapid rate. As the number of people who still speak a language dwindles, the chance of it surviving dwindles too. When the last person dies, the language is gone forever. To be the last living speaker of the language of your ancestors must be a terribly sad ordeal. One language’s extinction bordered on the surreal: the last time the language of the Atures, in South America, was heard, it was spoken by a parrot – an old blue-and-yellow macaw that had survived the deaths of all the local people.

Why do languages die?

The reasons smaller languages die are varied: from war and genocide, to disease and natural disaster, to the enticement of bigger, pushier languages. Can technology help? In fact global media – films, music and television – are helping languages to die, as the young turn their backs on the languages of their parents. The web, with its early English bias, may also be pushing minority languages even faster to the brink. But computers could be a force for good, protecting the world’s languages rather than destroying them.

Unicode to the rescue

In the early days of the web, web pages used the English alphabet. Everything in a computer is stored as numbers, including letters: 1 for ‘a’, 2 for ‘b’, for example. As long as different computers agree on the code, they can print a given number to the screen as the same letter. A problem with early web pages was that there were lots of different encodings of numbers to letters. Worse still, the widely used encodings only set aside enough numbers for the English alphabet. Not good if you want a computer to support other languages, with their variety of accents and completely different sets of characters. A new universal encoding system called Unicode came to the rescue. It aims to be a single universal character encoding – with enough numbers allocated for ALL languages – and it is allowing the web to become truly multilingual.
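You can see Unicode at work from any modern programming language. In Python, every character has its own code point, and UTF-8 (the encoding that now dominates the web) turns each code point into one or more bytes:

```python
# Each character, whatever the script, has its own Unicode number
# (its 'code point'); UTF-8 turns that number into bytes to send.
for ch in ["a", "é", "ß", "中", "ᚠ"]:
    print(ch, hex(ord(ch)), ch.encode("utf-8"))

# a    0x61    b'a'              (plain English fits in one byte)
# é    0xe9    b'\xc3\xa9'       (accented letters take two bytes)
# 中   0x4e2d  b'\xe4\xb8\xad'   (Chinese characters take three)
```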

Languages are spoken

Languages are not just written but spoken, and computers can help there too. Linguists around the world record speakers of smaller languages, understanding and preserving them. Originally this was done using tapes; now the languages can be stored on multimedia computers, which are not restricted to playing back recordings but can also actively speak written text. The web also allows much wider access to such materials, which can be embedded in online learning resources, helping new people to learn the languages. Language translators such as BabelFish and Google Translate can also help, though they are still far from perfect, even for common languages. The problem is that things do not translate easily between languages: each language really does constitute a different way of thinking, not just of talking. Some thoughts are hard even to think in a different language.

AI to the rescue?

Even that is not enough. To truly preserve a language, the speakers need to use it in everyday life, for everyday conversation: speakers need someone to speak with. Learning a language is not just about learning the words but learning the culture and the way of thinking – actively using the language. Perhaps future computers could help there too. A long-time goal of artificial intelligence (AI) researchers is to develop computers that can hold real conversations. In fact this is the basis of the original test for computer intelligence suggested by Alan Turing back in 1950: if a computer is indistinguishable from a human in conversation, then it is intelligent. There is also an annual competition that embodies this test: the Loebner Prize. It would be great if, in the future, computer AIs could help save languages by being additional everyday speakers, holding real conversations, being real friends.

Time is running out…
by the time the AIs arrive,
the majority of languages may be gone forever.

Too late?

The problem is that time is running out. Artificial intelligences that can hold totally realistic human conversations, even in English, are still a way off: none have passed the Turing Test. To speak different languages really well, for everyday conversations, those AIs will have to learn the different cultures and ‘think’ in the different languages. The window of opportunity is disappearing. By the time the AIs arrive, the majority of human languages may be gone forever. Let’s hope that computer scientists and linguists solve the problems in time, and that computers are not used just to preserve languages for academic interest, but really can help them survive. It is sad that the last living creature to speak Atures was a parrot. It would be equally sad if the last speakers of all current languages, bar say English, Spanish and Chinese, were computers.


This blog is funded through EPSRC grant EP/W033615/1.