Susan Kare: Icon Draw

by Jo Brodie, Queen Mary University of London

(from the archive)

A pixel drawing of a dustbin icon

Pick up any computer or smart gadget and you’ll find small, colourful pictures on the screen. These ‘icons’ tell you which app is which. You can just touch them or click on them to open the app. It’s quick and easy, but it wasn’t always like that.

Up until the 1980s if you wanted to run a program you had to type a written command to tell the device what to do. This made things slow and hard. You had to remember all the different commands to type. It meant that only people who felt quite confident with computers were able to play with them.

Computer scientists wanted everyone to be able to join in (they wanted to sell more computers too!) so they developed a visual, picture-based way of letting people tell their computers what to do, instead of typing in commands. It’s called a ‘Graphical User Interface’ or GUI.

An artist, Susan Kare, was asked to design some very simple pictures – icons – that would make using computers easier. If people wanted to delete a file they would click on an icon with her drawing of a little dustbin. If people wanted to edit a letter they were writing they could click on the icon showing a pair of scissors to cut out a bit of text. She originally designed them on squared paper, with each square representing a pixel on the screen. Over the years the pictures have become more sophisticated (and sometimes more confusing) but in the early days they were both simple and clear thanks to Susan’s skill.
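
You can try the same squared-paper idea in a few lines of code. Below is a minimal sketch (our own rough drawing, purely for illustration – not one of Susan Kare’s actual designs): each string is a row of the grid, with ‘#’ marking a filled pixel.

    # Each character is one square of graph paper: '#' = filled pixel.
    # This little 8x8 'dustbin' is our own sketch, not a real Kare icon.
    ICON = [
        "..####..",
        ".######.",
        "..#..#..",
        "..#..#..",
        "..#..#..",
        "..#..#..",
        "..####..",
        "........",
    ]

    for row in ICON:
        # Show filled pixels as solid blocks, empty ones as spaces.
        print("".join("█" if square == "#" else " " for square in row))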

Try our pixel puzzles which use the same idea. Then invent your own icons or pixel puzzles. Can you come up with your own easily recognizable pictures using as few lines as possible?

EPSRC supports this blog through research grant EP/W033615/1. 

Cognitive crash dummies

by Paul Curzon, Queen Mary University of London

The world is heading for catastrophe. We’re hooked on power-hungry devices: our mobile phones and iPods, our Playstations and laptops. Wherever you turn people are using gadgets, and those gadgets are guzzling energy – energy that we desperately need to save. We are all doomed, doomed… unless of course a hero rides in on a white charger to save us from ourselves.

Don’t worry, the cognitive crash dummies are coming!

Actually the saviours may be people like Bonnie John, a professor of human-computer interaction, and her then grad student, Annie Lu Luo: people who design cognitive crash dummies. Working at Carnegie Mellon University, it was their job to figure out ways of deciding how well gadgets are designed.

If you’re designing a bridge you don’t want to have to build it before finding out if it stays up in an earthquake. If you’re designing a car, you don’t want to find out it isn’t safe by having people die in crashes. Engineers use models – sometimes physical ones, sometimes mathematical ones – that show in advance what will happen. How big an earthquake can the bridge cope with? The mathematical model tells you. How slow must the car go to avoid killing the baby in the back? A crash test dummy will show you.

Even when safety isn’t the issue, engineers want models that can predict how well their designs perform. So what about designers of computer gadgets? Do they have any models to do predictions with? As it happens, they do. Their models are called ‘human behavioural models’, but think of them as ‘cognitive crash dummies’. They are mathematical models of the way people behave, and the idea is you can use them to predict how easy computer interfaces are to use.

There are lots of different kinds of human behavioural model. One such ‘cognitive crash dummy’ is called ‘GOMS’. When designers want to predict which of a few suggested interfaces will be the quickest to use, they can use GOMS to do it.

Send in the GOMS

Suppose you are designing a new phone interface. There are loads of little decisions you’ll have to make that affect how easy the phone is to use. You can fit a certain number of buttons on the phone or touch screen, but what should you make the buttons do? How big should they be? Should you use gestures? You can use menus, but how many levels of menus should a user have to navigate before they actually get to the thing they are trying to do? More to the point, with the different variations you have thought up, how quickly will the person be able to do things like send a text message or reply to a missed call? These are questions GOMS answers.

To do a GOMS prediction you first think up a task you want to know about – sending a text message perhaps. You then write a list of all the steps that are needed to do it. Not just the button presses, but hand movements from one button to another, thinking time, time for the machine to react, and so on. In GOMS, your imaginary user already knows how to do the task, so you don’t have to worry about spending time fiddling around or making mistakes. That means that once you’ve listed all your separate actions GOMS can work out how long the task will take just by adding up the times for all the separate actions. Those basic times have been worked out from lots and lots of experiments on a wide range of devices. They show, on average, how long it takes to press a button and how long users are likely to think about it first.
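
That adding-up step is simple enough to sketch in a few lines of Python. The per-action times below are the commonly quoted keystroke-level model (KLM) averages from the research literature; the mini text-message task is our own invented example, not a real GOMS analysis.

    # Average action times (seconds) from the keystroke-level model
    # (KLM) literature - averages measured over many users and devices.
    ACTION_TIMES = {
        "keypress": 0.28,  # press one key or button
        "point":    1.10,  # move finger/pointer to a new target
        "home":     0.40,  # move hand between keyboard and screen
        "think":    1.35,  # mentally prepare for the next step
    }

    def predict_time(actions):
        """GOMS-style prediction: add up the average action times."""
        return sum(ACTION_TIMES[action] for action in actions)

    # Invented mini-task: open messages, pick a contact, type "hi", send.
    send_text = ["think", "point", "keypress",    # open the messages app
                 "think", "point", "keypress",    # choose the contact
                 "keypress", "keypress",          # type the two letters
                 "think", "point", "keypress"]    # press send

    print(f"Predicted task time: {predict_time(send_text):.2f} seconds")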

GOMS in 60 seconds?

GOMS has been around since the 1980s, but wasn’t being used much by industrial designers. The problem is that it is very frustrating and time-consuming to work out all those steps for all the different tasks for a new gadget. Bonnie John’s team developed a tool called CogTool to help. You make a mock-up of your phone design in it, and tell it which buttons to press to do each task. CogTool then works out where the other actions, like hand movements and thinking time, are needed and makes predictions.

Bonnie John came up with an easier way to figure out how much human time and effort a new design uses, but what about the device itself? How about predicting which interface design uses less energy? That is where Annie Lu Luo came in. She had the great idea that you could take a GOMS list of actions and, instead of linking actions to times, work out how much energy the device uses for each action instead. By using GOMS together with a tool like CogTool, a designer can find out whether their design is the most energy efficient too.
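
Annie Lu Luo’s idea fits the same sketch as before: keep the list of actions, just swap the table of times for a table of energy costs. The numbers below are invented purely for illustration (reusing the send_text task list from the earlier sketch).

    # Hypothetical energy cost per action (millijoules) - invented
    # numbers, just to show the swap from times to energy.
    ACTION_ENERGY = {
        "keypress": 5.0,  # register the press, update the screen
        "point":    2.0,  # track the moving finger/pointer
        "home":     0.0,  # the device does nothing while hands move
        "think":    8.0,  # screen stays lit while the user thinks
    }

    def predict_energy(actions):
        """Same GOMS-style sum, but over energy instead of time."""
        return sum(ACTION_ENERGY[action] for action in actions)

    print(f"Predicted energy: {predict_energy(send_text):.1f} mJ")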

So it turns out you don’t need a white knight to help your battery usage, just Annie Lu Luo and her version of GOMS. Mobile phone makers saw the benefit of course. That’s why Annie walked straight into a great job on finishing university.


This article was originally published on the CS4FN website and appears on pages 12 and 13 of issue 9 (‘Programmed to save the world’) of the CS4FN magazine, which you can download (free) here along with all of our other free material.

See also the concept of ‘digital twins’ in this article from our Christmas Advent Calendar: Pairs: mittens, gloves, pair programming, magic tricks.



This blog is funded through EPSRC grant EP/W033615/1.

Devices that work for everyone #BlackHistoryMonth

A pulse oximeter on the finger of a Black person's hand

by Jo Brodie, Queen Mary University of London

In 2009 Desi Cryer, who is Black, shared a light-hearted video with a serious message. He’d bought a new computer with a face tracking camera… which didn’t track his face, at all. It did track his White colleague Wanda’s face though. In the video (below) he asked her to go in front of the camera and move from side to side and the camera obediently tracked her face – wherever she moved the camera followed. When Desi moved back in front of the camera it stopped again. He wondered if the computer might be racist…

The computer recognises Desi’s colleague Wanda, but not him

Another video (below), this time from 2017, showed a dark-skinned man failing to get a soap dispenser to give him some soap. Nothing happened when he put his hand underneath the sensor but as soon as his lighter-skinned friend put his hand under it – out popped some soap! The only way the first man could get any soap dispensed was to put a white tissue on his hand first. He wondered if the soap dispenser might be racist…

The soap dispenser only dispenses soap if it ‘sees’ a white hand

What’s going on?

Probably no-one set out maliciously to design a racist device, but designers need to check that their products work with a range of different people before putting them on the market. This can save the company embarrassment as well as creating something that more people want to buy.

Sensors working overtime

Both devices use a sensor that is activated (or, in these cases, isn’t) by a signal. Soap dispensers shine a beam of light which bounces off a hand placed below and some of that light is reflected back. Paler skin reflects more light (and so triggers the sensor) than darker skin. Next to the light is a sensor which responds to the reflected light – but if the device was only tested on White people then the sensor wasn’t adjusted for the full range of skin tones, and so won’t respond appropriately. Similarly, cameras have historically been designed for White skin tones, meaning darker tones are not picked up as well.
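
Stripped right down, the dispenser’s decision is just a comparison of reflected light against a threshold. This sketch (our simplification, with made-up numbers) shows how a threshold tuned only on paler, more reflective hands can sit above the signal a darker hand reflects back.

    # Simplified soap-dispenser logic with invented numbers. The
    # sensor reads reflected light on an arbitrary 0-100 scale.
    THRESHOLD = 40  # calibrated using only pale-skinned testers

    def hand_detected(reflected_light):
        return reflected_light > THRESHOLD

    print(hand_detected(65))  # paler hand: more reflection -> True
    print(hand_detected(30))  # darker hand: less reflection -> False
    print(hand_detected(85))  # white tissue -> True, soap at last

    # Testing with the full range of skin tones would have shown that
    # the threshold (or the sensor itself) needed changing.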

In the days when film was developed, technicians would use what was called a ‘Shirley’ card (a photograph of a White woman with brown hair) to colour-correct the photographs. The colour balancing meant darker skin tones didn’t come out as well. However, the problem was only really addressed when chocolate manufacturers and furniture companies complained that their different chocolates and dark brown wood products weren’t showing up correctly!

The Racial Bias Built Into Photography (25 April 2019) The New York Times

Things can be improved!

It’s a good idea, when designing something that will be used by lots of different people, to make sure that it will work correctly with everyone. Having a diverse design team and, importantly, making sure that everyone feels empowered to contribute is a good way to start. Another is to test the design with different target audiences early in the design process so that changes can be made before it’s too late. How a company responds to feedback when they’ve made an oversight is also important. In the case of the computer company they acknowledged the problem and went to work to improve the camera’s sensitivity. 

A problem with pulse oximeters

A pulse oximeter on the finger of a Black person's hand
Pulse oximeter image by Mufid Majnun from Pixabay
The oximeter is shown on the index finger of a Black person’s right hand.

During the coronavirus pandemic many people bought a ‘pulse oximeter’, a device which clips painlessly onto a finger and measures how much oxygen is circulating in your blood (and your pulse). If the oxygen reading became too low people were advised to go to hospital. Oximeters shine red and infrared light from the top clip through the finger and the light is absorbed differently depending on how much oxygen is present in the blood. A sensor on the lower clip measures how much light has got through but the reading can be affected by skin colour (and coloured nail polish). People were concerned that pulse oximeters would overestimate the oxygen reading for someone with darker skin (that is, tell them they had more oxygen than they actually had) and that the devices might not detect a drop in oxygen quickly enough to warn them.
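
Under the hood, a pulse oximeter typically compares the pulsing part of each light signal to its steady part, for both red and infrared, and converts that ‘ratio of ratios’ into an oxygen percentage using a calibration curve. The sketch below uses a rough, commonly quoted textbook approximation of that curve; real devices use their own empirically measured calibrations – and if the calibration data came mostly from lighter-skinned volunteers, that is exactly where bias creeps in.

    def estimate_spo2(ac_red, dc_red, ac_ir, dc_ir):
        """Estimate blood oxygen (%) from red/infrared readings.

        Uses the 'ratio of ratios' R and the rough textbook line
        SpO2 ~ 110 - 25*R. Real oximeters calibrate this curve
        empirically on volunteers - a non-diverse test group
        builds bias straight into the curve.
        """
        r = (ac_red / dc_red) / (ac_ir / dc_ir)
        return 110 - 25 * r

    # Invented example readings:
    print(round(estimate_spo2(0.02, 1.0, 0.04, 1.0)))  # R = 0.5 -> ~98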

In response the UK Government announced in August 2022 that it would investigate this bias in a range of medical devices to ensure that future devices work effectively for everyone.

Further reading

See also Is your healthcare algorithm racist? (from issue 27 of the CS4FN magazine).


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

The red sock of doom – trying to catch mistakes before they happen

Washing machine mistake

A red sock in with your white clothes wash – guess what happened next? What can you do to prevent it from happening again? Why should a computer scientist care? It turns out that red socks have something to teach us about medical gadgets.

How can we stop red socks from ever turning our clothes pink again? We need a strategy. Here are some possibilities.

  • Don’t wear red socks.
  • Take a ‘how to wash your clothes’ course.
  • Never make mistakes.
  • Get used to pink clothes.

Let’s look at them in turn – will they work?

Don’t wear red socks: That might help but it’s not much use if you like red socks or if you need them to match your outfit. And how would it help when you wear purple, blue or green socks? Perhaps your clothes will just turn green instead.

Take a ‘how to wash your clothes’ course: Training might help: you’d certainly learn that a red sock and white clothes shouldn’t be mixed – though you probably knew that anyway. It won’t stop you making a similar mistake again.

Never make misteaks: Just never leave a red sock in your white wash. If only! Unfortunately everyone makes mistakes – that’s why we have erasers on pencils and a delete key on computers – this idea just won’t work.

Get used to pink clothes: Maybe, but it’s not ideal. It might not be so great turning up to school in a pink shirt.

What if the problem’s more serious?

We can probably live with pink clothes, but what happens if a similar mistake is made at a hospital? Not socks, but medicines. We know everyone makes mistakes so how do we stop those mistakes from harming patients? Special machines are used in hospitals to pump medicine directly into a patient’s arm, for example, and a nurse needs to tell it how much medicine to give – if the dose is wrong the patient won’t get better, and might even get worse.

What have we learned from our red sock strategies? We can’t stop giving patients medicine and we don’t want to get used to mistakes, so our first and fourth strategies won’t work. We can give nurses more training, but everyone makes mistakes even when trained, so the second and third suggestions aren’t good enough either: training doesn’t stop someone else making the same mistake.

We need to stop thinking of mistakes as a problem that people make and instead as a problem that systems thinking can solve. That way we can find solutions that work for everyone. One possibility is to check whether changes to the device might make mistakes less likely in the first place.

Errors? Or arrows?

Most medical machines are controlled with a panel with numbered keys (a number keypad) like on mobile phones, or up and down arrows (an arrow keypad) like you sometimes get on alarm clocks. CHI+MED researchers have been asking questions like: which way is best for entering numbers quickly, but also which is best for entering numbers accurately? They’ve been running experiments where people use different keypads, are timed and their mistakes are recorded. The researchers also track where people are looking while they use the keypads. Another approach has been to create mathematical descriptions of the different keypads and then mathematically explore how bad different errors might be.

It turns out that if you can see the numbers on a keypad in front of you it’s very easy to type them in quickly, though not always correctly! You need to check the display to see if you have actually put in the right ones. Worse, the mistakes that are made are often massive – ten times too much or more. The arrow keypads are a little slower to use, but because people are already looking at the display (to see what numbers are appearing) they can help nurses be more accurate: not only are fewer mistakes made, but those that are made tend to be smaller.
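
A quick back-of-the-envelope sketch shows why the sizes of the errors differ. Assume (our simplification of the research) that a slip on a number keypad mistypes or drops a single character, while a slip on an arrow keypad over- or under-shoots by one step.

    # Intended dose: 5.0 units. Invented examples of typical slips.
    intended = 5.0

    # Number keypad slip: the decimal point doesn't register, so
    # "5.0" becomes "50" - ten times the intended dose.
    number_pad_entry = float("50")
    print(number_pad_entry / intended)   # -> 10.0 (massive error)

    # Arrow keypad slip: one press too many, stepping in 0.5 units.
    arrow_entry = intended + 0.5
    print(arrow_entry / intended)        # -> 1.1 (small error)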

Smart machines help users

A medical device that actively helps users avoid mistakes helps everyone using it (and the patients it’s being used on!). Changing the interface to reduce errors isn’t the only solution though. Modern machines have ‘intelligent drug libraries’ that contain information about the medicines and what sort of doses are likely and safe. Someone might still mistakenly tell the machine to give too high a dose but now it can catch the error and ask the nurse to double-check. That’s like having a washing machine that can spot a bright sock in a white wash and refuses to switch on till it has been removed.
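
Here is a minimal sketch of such a drug-library check (the drug name and limits are invented for illustration): before starting, the pump compares the requested dose against known safe ranges.

    # Invented drug library: safe hourly dose limits for each medicine.
    DRUG_LIBRARY = {
        "examplamycin": {"soft_max": 10.0, "hard_max": 20.0},
    }

    def check_dose(drug, dose):
        """Compare a requested dose against the library's safe limits."""
        limits = DRUG_LIBRARY[drug]
        if dose > limits["hard_max"]:
            return "REFUSE: dose is above the hard safety limit"
        if dose > limits["soft_max"]:
            return "WARN: unusually high - please double-check"
        return "OK: start the pump"

    print(check_dose("examplamycin", 50.0))  # refused outright
    print(check_dose("examplamycin", 12.0))  # asks for a double-check
    print(check_dose("examplamycin", 5.0))   # accepted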

Building machines with a better ability to catch errors (remember, we all make mistakes) and helping users to recover from them easily is much more reliable than trying to get rid of all possible errors by training people. It’s not about avoiding red socks, or errors, but about putting better systems in place to make sure that we find them before we press that big ‘Start’ button.

This story was originally published here and is an article from CS4FN, a free computer science magazine from Queen Mary University of London which is sent to subscribing UK schools. To find out more please visit our About page.

Further reading / watching
You can find a copy of this article on pages 4 and 5 of issue 17 (Machines Making Medicine Safer) of the CS4FN magazine.

From 50 seconds into this Paddington 2 clip you can see a ‘real world’ example of a red sock getting into the laundry.

 

Gadgets based on works of fiction

Why might a computer scientist need to write fiction? To make sure she creates an app that people actually need.

Portrait images of lots of people used as personas.

Writing fiction doesn’t sound like the sort of skill a computer scientist might need. However, it’s part of my job at the moment. Working with expert rheumatologists Amy MacBrayne and Fran Humby, I am helping a design team understand what life with rheumatoid arthritis is like, so they can design software that is actually needed and so will be used and useful.

A big problem with developing software is that programmers tend to design things for themselves. However, programmers are not like the users of their software. They have different backgrounds and needs and they have been trained to think differently. Worse, they know the system they are developing inside out, unlike its users. An important first step in a project is to do background research to understand your users. If designing an app for people with rheumatoid arthritis, you need to know a lot about the lives of such people. To design a successful product, you particularly need to understand their unfulfilled goals. What do they want to be able to do that is currently hard or impossible?

What do you do with the research? Alan Cooper’s idea of ‘personas’ is a powerful next step – and this is where writing fiction comes in. Based on the research, you write descriptions of lots of fictional characters (personas), each representing groups of people with similar goals. They have names, photos and realistic lives. You also write scenarios about their lives that help you understand their goals. Next, you merge and narrow these personas down, dropping some, creating new ones, altering others. Your aim is to eventually end up with just one, called the primary persona. The idea is that if you design for the primary persona, you will create something that meets the goals of the groups represented by the other personas it replaced.

The primary persona (let’s call her Samira) is then used throughout the design process as the person being designed for. If wondering whether some new feature or way of doing things is a good idea, the designers would ask themselves, “Would Samira actually want this? Would she be able to use it?” If they can think of her as a real person, it is much easier to make decisions than if thinking of some non-existent abstract “user” who becomes whatever each team member wants them to be. It helps stop ‘feature bloat’ where designers add in every great idea for a new feature they have but end up with a product so complex no one can, or wants to, use it.

As part of the Queen Mary PAMBAYESIAN project we have been talking to rheumatoid arthritis patients and their doctors to understand their needs and goals. I’ve then created a cast of detailed personas to represent the results. These can act as an initial set of personas to help future designers designing apps to support those with the disease.

If you thought creative writing wasn’t important to a computer scientist, think again. A good persona needs to be as powerfully written and as believable as a character in a good novel. So, you should practice writing fiction as well as writing programs.

Read some of our personas about living with rheumatoid arthritis here.

– Paul Curzon, Queen Mary University of London, Spring 2021

See the related Teaching London Computing Activity

Find out more about goal-directed design and personas from their creator in Alan Cooper’s wonderful book “The Inmates Are Running the Asylum” (the inmates are computer scientists!)

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

How do you solve a problem like arthritis?

Some diseases can’t be cured. Doctors and nurses just try to control the disease to stop it ruining people’s lives. Perhaps smartphone apps can pull off the trick of giving patients better care while giving clinicians more time to spend with the patients who most need them? A Venn diagram is at the centre of the Queen Mary team’s prototype.

A Venn diagram of low participation, low empowerment and low independence with images linked to each – people eating in a restaurant, a person holding out their arms at the top of a peak and two people walking.

What is rheumatoid arthritis?

Normally your immune system does a good job of fighting infection and keeping you healthy. But, if you have an autoimmune disease, it can also attack your healthy cells, causing inflammation and damage. Rheumatoid arthritis is like this: a painful condition that mostly affects hands, knees and feet as the person’s immune system attacks their joints, making them swell painfully. It affects around 400,000 people in the UK and is more common in women than men.

People with the disease alternate between periods when it is under control and they have few symptoms, and days or weeks of painful ‘flares’ when it is very, very bad. During these flares it especially affects a person’s ability to live a normal life. It can be hard to move around comfortably or do exercise, and it interferes with their ability to work. It can also leave them totally reliant on family and friends just to do everyday things like dress or eat, never mind go out. This can lead to depression and puts a strain on friendships.

Treating the disease

Treatment, which can include tablets, injections, physiotherapy and sometimes surgery, slows the disease, keeping it under control for long periods. Sufferers are also given advice on lifestyle changes. This all reduces the risk of joint damage and helps people live their life more fully.

At appointments, doctors collect information to help them see how the disease is progressing. A Disease Activity Score (DAS) calculator lets them combine measurements for pain, how tender or swollen their patient’s joints are and how many joints are affected. Regular blood tests keep track of the amount of inflammation and how the body is reacting to drugs. This helps them decide if they need to adjust the medication.
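
One widely used version of the score is DAS28, which combines counts from 28 examined joints with a blood-test result and the patient’s own assessment. Here is a sketch using the published DAS28 (ESR) formula; the example patient is invented.

    import math

    def das28_esr(tender, swollen, esr, global_health):
        """Disease Activity Score over 28 joints (ESR version).

        tender, swollen: joint counts out of the 28 examined
        esr: blood inflammation marker (mm/hour)
        global_health: patient's own rating, 0 (well) to 100 (very bad)
        """
        return (0.56 * math.sqrt(tender)
                + 0.28 * math.sqrt(swollen)
                + 0.70 * math.log(esr)
                + 0.014 * global_health)

    # Invented example patient. A score above about 5.1 is usually
    # taken to mean high disease activity; below 2.6, remission.
    print(round(das28_esr(tender=6, swollen=4, esr=30, global_health=50), 1))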

If it is caught early, modern medicine reduces the worst effects of the disease, helped by keeping a close eye on the Disease Activity Score, as treatments may need to be repeatedly adjusted to control flares. This requires regular hospital visits, which use up scarce healthcare resources and are very time-consuming for patients. It is hampered because hospital appointments may only happen twice a year due to the number of patients. Everyone wants to give more personalised care, but hospitals just can’t afford to provide it.

Supporting doctors

So, what do you do when there just aren’t enough doctors to see everyone as regularly as needed to maintain their patients’ wellbeing? One solution is to use remote monitoring with an app on a patient’s smartphone, so involving patients more directly in their own care. They can use such apps to regularly record their own disease activity measurements, sharing the information with their doctor to save visiting the hospital.

A smart app

This is an improvement, but the measurements still require expert monitoring and can take more of the doctor’s time. However, if smartphones can actually be made to be, well, smart, then they could help give advice between hospital visits and alert the hospital team, when needed, so they can step in. This might involve, for example, loading the app with background knowledge about rheumatoid arthritis, expert knowledge from lots of doctors, and creating an artificial intelligence to use this information effectively for each patient.

Hospital specialists and computer scientists at Queen Mary are developing such a prototype based on Bayesian networks as the artificial intelligence core. Bayesian networks are based on reasoning about the causes of things and how likely different things are to be the cause of something being observed. Building the prototype involves finding out if patients and clinicians find such tools useful and acceptable (some people might find clinic visits reassuring, while some may be keener to avoid taking the time off work, for example).
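
To get a feel for the reasoning style, here is a toy Bayesian update with invented numbers (nothing like the real network): given how likely a flare is, and how likely a flare is to cause poor sleep, observing poor sleep lets the system revise its belief that a flare is happening.

    # Toy Bayesian reasoning with invented probabilities.
    p_flare = 0.2                     # prior: chance of a flare
    p_poor_sleep_if_flare = 0.8       # flares usually disturb sleep
    p_poor_sleep_if_no_flare = 0.3    # poor sleep has other causes too

    # The patient reports poor sleep. Bayes' rule:
    # P(flare | poor sleep) = P(poor sleep | flare) * P(flare) / P(poor sleep)
    p_poor_sleep = (p_poor_sleep_if_flare * p_flare
                    + p_poor_sleep_if_no_flare * (1 - p_flare))
    p_flare_given_sleep = p_poor_sleep_if_flare * p_flare / p_poor_sleep

    print(round(p_flare_given_sleep, 2))  # belief rises from 0.2 to 0.4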

Smart and patient centred

This still focusses on a clinician’s view of treatment using drugs though. With a smartphone app we can perhaps do better and take the person’s life into account – but how? The first step is to understand patient goals. Patients would need to be willing to share lots of information about themselves so that the software can learn as much as possible about them. Eventually, this might be done using sensors that automatically detect information: how much pain they are in, how stiff their joints are, how much they move around, how long it takes them to get out of a chair, how much sleep they get, how often they meet others, if and when they take their medicine, and so on. Rather than just focussing on medical treatment it can then focus advice ‘holistically’ on the whole person.

The Queen Mary team’s approach is centred around three different things: helping people with physical independence so they can move around and look after themselves; empowering them to manage their condition and general well-being themselves; and participation in the sense of helping them socialise, keep friendships and maintain family bonds.

The Bayesian network processes the information about patients and computes their predicted levels of independence, empowerment and participation, working out how good or bad things are for them at the moment. This places them in one of seven regions of a Venn diagram of the three dimensions, showing which areas need most attention. It then gives appropriate advice, aiming to keep all three dimensions in balance, monitoring what happens, but also alerting the hospital when necessary.

So, for example, if the Bayesian network judges independence low, participation high and empowerment low, the patient is in the Venn diagram intersection of low empowerment and low independence. Advice in the following weeks, based on this area of the Venn diagram, would focus on things like coping with pain and stiffness, getting better sleep, as well as how to manage the disease in general.
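
The placement step boils down to mapping three low/high judgements onto one of the seven regions that need attention. A minimal sketch of the idea (our own simplification, not the project’s actual code):

    def venn_region(independence_low, empowerment_low, participation_low):
        """Map three low/high judgements to an advice region."""
        low = [name for name, flag in [
            ("independence", independence_low),
            ("empowerment", empowerment_low),
            ("participation", participation_low)] if flag]
        if not low:
            return "all dimensions fine: routine monitoring only"
        return "focus advice on: " + " + ".join(low)

    # The example from the text: independence low, empowerment low,
    # participation high.
    print(venn_region(True, True, False))
    # -> focus advice on: independence + empowerment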

By personalising advice and focusing on the whole person, it is hoped patients will get more appropriate care as soon as they need it, but doctors’ time will also be freed up to focus on the patients who most need their help.

– Jo Brodie, Hamit Soyel and Paul Curzon, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

So, so tired…

Fatigue is a problem that people with a variety of long-term diseases can also suffer from.

A man, hands over face, very, very tired.
Image by Małgorzata Tomczak from Pixabay

This isn’t just normal tiredness, but something much, much worse: so bad that it is a struggle to do anything at all, destroying any chance of a normal life. Doctors can often do little to help beyond managing the underlying disease, then hope the fatigue sorts itself out. Sometimes the fatigue stays with the person long, long after the disease itself is under control. Maha Albarrak, for her PhD, is exploring how computer technology might help people cope. Her first step is to interview those suffering to find out what kind of help they really need. Then she will work closely with volunteers to come up with solutions that solve the problems that matter.

– Paul Curzon, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

Here

A mannequin in shadow, arms spread wide, waiting to be dressed
Image by Zaccaria Boschetti from Pixabay

Amy Dowse wondered if an app might help people suffering with anxiety. One way to overcome panic attacks is a mindfulness technique where you focus on the here and now – your surroundings rather than your internal feelings. For her university MSc project, she created an app to help people do this, called Here. It prompts you to look for coloured objects in the real world then use them to build a picture in the app. For example, you look at the colour of the clothes that people around you are wearing and try to fully dress a figure on the app using what you see.

– Paul Curzon, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

Smart bags

In our stress-filled world with ever increasing levels of anxiety, it would be nice if technology could sometimes reduce stress rather than just add to it. That is the problem that QMUL’s Christine Farion set out to solve for her PhD. She wanted to do something stylish too, so she created a new kind of bag: a smart bag.

Christine realised that one thing that causes anxiety for a lot of people is forgetting everyday things. It is very common for us to forget keys, train tickets, passports and other everyday things we need for the day. Sometimes it’s just irritating. At other times it can ruin the day. Even when we don’t forget things, we waste time unpacking and repacking bags to make sure we really do have the things we need. Of course, the moment we unpack a bag to check, we increase the chance that something won’t be put back!

Electronic bags

Christine wondered if a smart bag could help. Over the space of several years, she built ten different prototypes using basic electronic kits, allowing her to explore lots of options. Her basic design has coloured lights on the outside of the bag, and a small scanner inside. To use the bag, you attach electronic tags to the things you don’t want to forget. They are like the ones shops use to keep track of stock and prevent shoplifting. Some tags are embedded into things like key fobs, while others can be stuck directly on to an object. Then when you pack your bag, you scan the objects with the reader as you put them in, and the lights show you they are definitely there. The different coloured lights allow you to create clear links – natural mappings – between the lights and the objects. For her own bag, Christine linked the blue light to a blue key fob with her keys, and the yellow light to her yellow hayfever tablet box.
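
In outline, the bag’s behaviour is a simple mapping from tag IDs to lights. Here is a hedged sketch of that logic (the tag IDs and scanning function are hypothetical stand-ins, not Christine Farion’s actual code or hardware interface):

    # Hypothetical smart-bag logic. The tag IDs stand in for real
    # RFID tags; a real bag would read them from a scanner inside.
    TAG_TO_LIGHT = {
        "tag-keys":    "blue",    # blue key fob      -> blue light
        "tag-tablets": "yellow",  # yellow tablet box -> yellow light
    }

    packed = set()

    def on_tag_scanned(tag_id):
        """Light the matching colour when an item is scanned in."""
        light = TAG_TO_LIGHT.get(tag_id)
        if light:
            packed.add(tag_id)
            print(f"{light} light ON: item packed")

    on_tag_scanned("tag-keys")

    # At a glance before leaving: any light still off means that
    # item was never scanned into the bag.
    missing = [tag for tag in TAG_TO_LIGHT if tag not in packed]
    print("not yet packed:", missing)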

In the wild

One of the strongest things about her work was that she tested her bags extensively ‘in the wild’. She gave them to people who used them as part of their normal everyday life, asking them to report to her what did and didn’t work about them. This all fed into the designs for subsequent bags and allowed her to learn what really mattered to make this kind of bag work for the people using it. One of the key things she discovered was that the technology needed to be completely simple to use: if it wasn’t both obvious how to use and quick and simple to do, it wouldn’t be used.

Christine also used the bags herself, keeping a detailed diary of incidents related to the bags and their design. This is called ‘autoethnography’. She even used one bag as her own main bag for a year and a half, building it completely into her life, fixing problems as they arose. She took it to work, shopping, to coffee shops … wherever she went.

Suspicious?

When she showed people her prototype bags, one of the common worries was that the electronics would look suspicious and be a problem when travelling. She set out to find out, taking her bag on journeys around the country, on trains and even to airports, travelling overseas on several occasions. There were no problems at all.

Fashion matters

As a bag is a personal item we carry around with us, it becomes part of our identity. She found that appropriate styling is, therefore, essential in this kind of wearable technology. There is no point making a smart bag that doesn’t fit the look people want to carry around. This is a problem with a lot of today’s medical technology, for example. Objects that help with medical conditions – like diabetic monitors or drug pumps, and even things as simple and useful as hearing aids or glasses – can, while ‘solving’ a problem, lead to stigma if they look ugly. Fashion on the other hand does the opposite. It is all about being cool. Christine showed that by combining design of the technology with an understanding of fashion, her bags were seen as cool. Rather than designing just a single functional smart bag, ideally you need a range of bags if the idea is to work for everyone.

Now, why don’t I have my glasses with me?

– Paul Curzon, Queen Mary University of London, Autumn 2018

Download Issue 25 of the cs4fn magazine, “Technology Worn Out (and about)”, on wearable computing, here.