Emily Turnage
For this blog post, I'd like to talk about not one article, but a number of TED Talks related to one central idea: designing with disability in mind. The talks range from Chieko Asakawa, a blind researcher speaking on the innovations she is helping to pioneer so that blind people can live independently, to Neil Harbisson - a colorblind man who had a chip and camera implanted into his skull so that he can hear colors, giving him a range of experiences unlike any other human has had. All these talks emphasized one thing in particular: designing with disability in mind helps more than just disabled people. That alone would be reason enough - ensuring that people with various disabilities can live and function independently of caretakers or helpers - but in truth, designing for disability tends to open up solutions to problems we weren't even looking to solve.
Take, for instance, Neil Harbisson's light-sensitive camera and chip implant. Not only does the chip allow him to hear the spectrum of light visible to humans, but it also lets him perceive ultraviolet light, something normally invisible to us. Ultraviolet light from the sun is what causes sunburns, and a lifetime of overexposure can lead to various types of skin cancer. Because Harbisson's implant lets him hear ultraviolet light, it reminds him not to go without protection when the sun is out - or even when it's cloudy, since UV light can penetrate cloud cover. This might seem like a silly example - we already know to wear sunscreen - but it's a reminder that by solving problems for disabled people, we may unlock helpful technology for everyone. The telephone is a case in point: Chieko Asakawa tells us it was invented in the course of trying to create a communication aid for hearing-impaired people. Asakawa reminds us that accessibility ignites innovation - that creating accessibility solutions for disabled people opens up paths we didn't even know existed.

That being said, I think it is not only an ethical decision but an ethical imperative that we, as designers, pay heed to disability and attempt to design with it in mind. From a utilitarian point of view - doing the most good for the most people - factoring disability into design hurts few, if any, users and has the potential to drastically improve some user experiences. It also, as mentioned previously, can open up avenues of exploration and innovation that benefit people beyond those we had in mind when designing for disability. Certain keyboards, Asakawa notes, were likewise created to help people with disabilities, and most of us take the existence of keyboards for granted every single day.
As designers - no matter what field we enter once we leave college - we have the largest say in how people will interface with the things we create. We should design not just for the able-bodied, able-minded people we tend to regard as "normal" - a notion rendered moot by the fact that almost 20% of Americans have a disability - but for everyone: making text easy for screen readers to parse, or providing closed captioning and alternatives for otherwise audio-only experiences, to name just two small examples. As designers, it's our duty to think about and engage with everyone - not just those it's most convenient for us to cater to.
In this blog post, I'd like to discuss a phenomenon detailed in this article published on BBC News: the sexism permeating the world of artificial intelligence. With the likes of Cortana, Alexa, and Siri growing ever more commonplace, it has become natural to think of chatbots as feminine creations. This is concerning, the article explains, because of the implication that chatbots - services, robots devoid of gender and designed to obey the user's commands - are inherently feminine. It's true that sexism persists in the tech industry, with the BBC article estimating that women make up only about 30% of the technology workforce. But why is this a problem? Why is the absence of women in the industry a bad thing for AI?
For one, prejudices held by the creators of these chatbots can work their way into the chatbots themselves; the idealization of a demure, feminine servant is problematic, as previously stated, and results from a default way of thinking that men have grown accustomed to. It is not a conscious choice made because "women deserve to be servants"; rather, it's the normalization of this subconscious line of thought that is concerning. Without women - or, indeed, more progressive men - in the workforce combating this thinking, sexualized "fembots", as the BBC article puts it, could become more and more mainstream.

For another, the absence of women in the industry can further demoralize women looking to enter it. It means something to look at a company and see yourself represented within it; this is why movements to bring more people of color and women into the tech workforce have emerged in recent years - because for years it has been a primarily white, male-dominated field. When a company puts out a demure, feminine chatbot made primarily by men, it can be demoralizing for a potential female applicant: if, however subconsciously, that is the ideal woman, then what place does she have?

So what should be done about it? It's not as easy as simply putting more women into the workforce - though that is a good goal regardless, it's not the be-all and end-all solution to the sexism that pervades the field of artificial intelligence. Rather, changing the perception of chatbots into something less human may solve some of the issue. It's our job, as designers and developers, to ensure our biases (viewing chatbots as inherently feminine, for example) don't influence the products we make - and that begins, in this case, with our perception of chatbots as a whole. By removing the human, gendered component - a gender-neutral voice, with no name other than perhaps the device name - a chatbot could be exactly that.
Just a chatbot, with a purpose to assist in tasks - no feminization required. And, according to the article, some companies have already taken this approach: one financial bot in particular does not have a gendered voice, and is quick to divert playful or sexual banter back to the bot's actual purpose, rather than offering the "playful" responses of better-known bots.

In the past few blog posts, I've gone over privacy and how the government decides what companies are allowed to store about us - and the personal implications those decisions can have on users. This week's blog post is in the same vein: I'm going to discuss information storage and a privacy issue, but in a completely different light, as outlined by this article on CNN. It details how a court is trying to get Amazon to hand over the records from a defendant's Echo device, which works by constantly listening for the right wake word - in the Echo's case, the name "Alexa".
In the article, they note that the sound in the room is actually recorded, stored, and processed, to be deleted at "a later date". This is frightening, considering the sort of personal, private conversations that happen in a home and nowhere else - though presumably anyone purchasing a voice-activated in-home device would be aware of this most salient point before buying one. Recording is what the device requires to run, but should that information be handed over - at least to courts, when formally requested - by Amazon? It's hard to defend any answer other than "yes". Though it could be seen as an invasion of privacy, similar devices - cell phones, computers, et cetera - are seized by courts fairly often as part of cases; it makes sense that devices like Google Home or the Amazon Echo would follow in those footsteps. Amazon, however, is pushing back, saying it will not release the information and that "devices like the Echo… shouldn't be used against you". When someone is accused of a crime, I am of the opinion that anything and everything that could be used for or against them can, and should, be utilized to paint the entire picture for the jurors. Amazon's concern over whether its product is used against its users comes down to fear of bad publicity: people hearing about the Echo being used in court cases will only amplify existing concerns over privacy while using the device.
About the author: I am a senior studying Communication Design, with an emphasis in Game Design. I like playing video games, writing, and yelling too loudly about things I care about. (May 2017)