Emily Turnage
Gamification of Social Media
As last week I looked at the ethics of allowing AIs to make life-or-death decisions, this week I will examine another facet of military AI: swarm drones, as shown in this article on CBS News. The drones discussed in the article are called Perdix, and they can be deployed in huge groups to do any number of things within dangerous territory. Because they're so small, the article describes their expendable nature as "no great loss if it crashes into the ground". The article goes on to describe some of the uses of these swarm drones - namely, reconnaissance and searching for targets in a specified area. The drones, if on an actual mission, would have been hooked up to weapons systems and able to tell them where and when to fire.
This touches on the same theme I discussed last week - drones making decisions, collectively, about who to kill and how and when. These drones make the question even more complex through their expendability, allowing people to employ them for tasks that would never have been humanly possible due to the high risk of casualties. These 'swarm drones' would revolutionize warfare, and perhaps not in a good way. How would we fight wars if we had nothing to lose? If we only risked material wealth - something easily replaceable - instead of factoring real, human lives into the equation? How do you deal with the fallout from a mistake on the part of one of the drones? These questions must be answered both by the companies developing these technologies and by the military hierarchies that plan on using them. If the game of war changes so drastically that not a single human life has to be risked in order to strike enemy territory, does that make those with the best technology ultimately the sole authors of our future history? Technology used to subdue, control, and substitute for real human lives is questionable at best, and prone to ethical nightmares at worst. As far as I am personally concerned, the idea of these swarm drones carrying out military operations completely autonomously is a chilling thought - that no human thought other than what makes it into these machines' programming is being put into the weighing of human lives. Though the use of these drones could improve military tactics, their programming is too fallible for them ever to be left fully autonomous. If we are to use this technology regularly, I would personally prefer to at least have a human behind it, ultimately overseeing its actions and overriding them if necessary.
This article by KQED Learning discusses the topic of AI in military organizations. Truly, I hadn't put a whole lot of thought into it as a topic, but having heard about it offhandedly through classmates in Ethics, I decided I'd take a look - and I was surprised at what I found. I knew drones were widely used in operations throughout the Middle East, but not that their piloting could be - and has been, in some cases - fully automated, and that they are made to make decisions about the use of lethal force. This, even though it's happening in countries that I will likely never visit, affecting people I will never even come close to interacting with, is still baffling to me.
I understand that the US is engaged in many militarized efforts. I understand that there may not be enough manpower - or that it might be too dangerous for said manpower - to do all of the things that these drones are doing. But when a machine is the one to decide whether someone should live or die, that's a terrifying thought. As outlined in the paper, drones may make mistakes that people would not, owing to imperfect recognition software, and a machine's algorithm botching a decision seems, for some reason, much worse to me than a human botching that same decision. A human will learn from that mistake; there are only so many improvements that can be made to technology, and at least in our lifetimes I don't think that it will ever be perfected. By shunting these decisions onto machines, we lose the fairness of a human making judgement calls that a machine will never be able to make. That's not to say AI in the military does not have its benefits - it allows us to carry out many more missions than we could with just soldiers in the field, and AI systems can, as the article outlines, perform many tasks without the same fatigue that humans experience. Robots don't disobey orders - or, at least, that hasn't been experienced yet, though science fiction leads us to believe it is inevitable. And robots can be made much tougher than humans. So perhaps machines are best left to those tasks - de-mining fields or patrolling a given area - rather than having to make decisions about whether or not to kill groups of insurgents that may not even be armed. There is something to be said for the automation of certain tasks - the potential benefits to our military are great - but the decision to kill a human being is something that I personally believe should not be automated.

For this blog post, I'd like to talk about not one article, but a number of TED Talks related to one central idea: designing with disability in mind. The talks range from Chieko Asakawa speaking on the innovations she's helping to pioneer so that blind people like herself can live independently, to Neil Harbisson - a colorblind man who had a chip and camera implanted into his skull so that he can hear colors, giving him a range of experiences unlike those any other human has had. All these talks emphasized one thing in particular: that designing with disability in mind helps more than just disabled people. Though on its own that would be enough - ensuring that people with various disabilities are able to live and function independently of caretakers or helpers - in truth, designing for disability has a tendency to open up avenues to solutions for problems we weren't even looking to solve.
Take, for instance, Neil Harbisson's light-sensitive camera and chip implant. Not only does the chip allow him to hear the spectrum of light visible to humans, it also lets him hear ultraviolet light, something normally imperceptible to us. Ultraviolet light produced by the sun is what causes sunburns, and a lifetime of overexposure can lead to various types of skin cancer. Because Harbisson's implant allows him to hear ultraviolet light, it serves as a reminder not to go without protection when the sun is out - or even when it's cloudy, as UV light can penetrate cloud cover. This seems like a silly example - we know to wear sunscreen already. But it's a reminder that by solving problems for disabled people, we may find that we unlock helpful technology for all people. This is true of the telephone, which Chieko Asakawa tells us was invented in the course of creating a communication aid for hearing-impaired people. Asakawa reminds us that accessibility ignites innovation - that creating accessibility solutions for disabled people opens up paths we didn't even know existed. With that in mind, I think it is not only an ethical decision but an ethical imperative that we, as designers, pay heed to and attempt to design with disability in mind. From a utilitarian point of view - doing the most good for the most people - factoring disability into design hurts few, if any, users, and has the potential to drastically improve some user experiences. It also, as mentioned previously, can open up avenues of exploration and innovation that benefit people beyond just those we have in mind when designing for disability. Certain keyboards, in fact, were created to help people with disabilities as well, Asakawa says, and most of us take the existence of keyboards for granted every single day. As designers - no matter what field we are poised to enter once we leave college - we have the largest say in how people will interface with the things we create. We should design not just for the able-bodied, able-minded people we tend to regard as "normal" - though almost 20% of Americans have a disability, rendering that notion of "normal" utterly moot - but for everyone, whether that means making text easy for screen readers to parse or providing closed captioning and other alternatives for otherwise audio-only experiences, to name just a couple of small examples. As designers, it's our duty to think about and engage with everyone - not just those whom it's most convenient for us to cater to.

In this blog post, I'd like to discuss a phenomenon detailed in this article published on BBC News - specifically, the phenomenon of sexism permeating the world of artificial intelligence. Of course, with the likes of Cortana and Alexa and Siri growing more and more commonplace, it has become natural to think of chatbots as feminine creations. This phenomenon, the article details, is concerning because of the implication that chatbots - services, robots devoid of gender, designed to serve and to obey the user's commands - are inherently feminine. It's true that sexism still persists in the tech industry, with the BBC article estimating that women make up only about 30% of the technology workforce. But why is this a problem? Why is the apparent absence of women from the industry a bad thing for AI?
For one, it follows that prejudices from the creators of said chatbots could work their way into the chatbots themselves; the idealization of a demure, feminine servant is problematic, as previously stated, and is the result of a default way of thinking that men have grown accustomed to. While not consciously a choice made because "women deserve to be servants", it's the normalization of this subconscious line of thought that is concerning. Without women - or, indeed, more progressive men - in the workforce combating this line of thought, sexualized "fembots", as the BBC article puts it, could become more and more mainstream. For another, the absence of women in the industry can further demoralize women looking to get into it. It really says something to be able to look at a company and see representation of yourself within it; this is why movements to bring more people of color and women into the tech workforce have begun in recent years - because for years, it has been a primarily white, male-dominated field. But when a company puts out a demure, feminine chatbot made primarily by men, it can be demoralizing for a potential female applicant; if, however subconsciously, that's the ideal woman, then what place does she have? So what should be done about it? It's not as easy as simply putting more women into the workforce - though a good goal to have regardless, it's not the be-all, end-all solution to the sexism that pervades the field of artificial intelligence. Rather, it's changing the perception of chatbots to something less human that will, perhaps, solve some of the issue. It's our job, as designers and developers, to ensure our biases (viewing chatbots as inherently feminine, for example) don't influence the products we make - and that begins, in this case, with our perceptions of chatbots as a whole. By removing the human, gendered component - a gender-neutral voice, with no name other than perhaps the device name - a chatbot could be exactly that: just a chatbot, whose purpose is to assist with tasks, no feminization required. And, according to the article, some already are; one financial bot in particular does not have a gendered voice, and is quick to divert playful or sexual banter back to the bot's actual focus, rather than giving the perhaps "playful" responses of more well-known bots.

In the past few blog posts, I've gone over privacy, and how the government decides what companies are allowed to store about us - and the personal implications those decisions can have on users. This week's blog post is in the same vein, as I'm going to discuss information storage and a privacy issue, but in a completely different light - outlined by this article on CNN, which details how a court is trying to get Amazon to hand over the records from a defendant's Echo device. The Echo works by constantly listening for the right word to wake it - in its case, the name "Alexa".
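To make the privacy stakes concrete, here is a minimal toy sketch of how a wake-word loop can work - purely illustrative and not Amazon's actual implementation, with the microphone feed simulated as text snippets and the function names invented for the example:

```python
# Toy illustration of an always-listening wake-word loop.
# NOT Amazon's actual code: the "audio" is simulated as text snippets,
# and send_to_cloud() is a hypothetical stand-in for remote processing.

WAKE_WORD = "alexa"

def heard_snippets():
    """Hypothetical stand-in for a live, always-on microphone feed."""
    yield from [
        "so how was the doctor's appointment",
        "alexa, play some music",
        "did you pay the electric bill yet",
    ]

def send_to_cloud(snippet: str) -> None:
    """Hypothetical stand-in for uploading audio for processing and storage."""
    print(f"uploaded for processing (and retention): {snippet!r}")

for snippet in heard_snippets():
    # Every snippet is examined locally - the device is always listening -
    # but only snippets containing the wake word are sent off the device,
    # and those are the records a court could later request.
    if WAKE_WORD in snippet.lower():
        send_to_cloud(snippet)
```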
In the article, they talk about the sound in the room actually being recorded, stored, and processed, to be deleted at "a later date". This is frightening, considering the sort of personal, private conversations that happen in a home far more often than they do online - though one would hope that people purchasing a voice-activated, in-home device would be aware of this most salient point before buying one. It's what is required for the device to run, but should that information be handed over - at least to courts, when formally requested - by Amazon? It's hard to defend any viewpoint other than 'yes'. Though it could be thought of as an invasion of privacy, other, similar devices - cell phones, computers, et cetera - are seized by courts fairly often as part of cases; it makes sense that devices like Google Home or the Amazon Echo would follow in those footsteps. Amazon, however, is pushing back, saying it will not release the information and that "devices like the Echo… shouldn't be used against you". When someone is accused of a crime, I am of the opinion that anything and everything that could be used for or against them can, and should, be utilized in order to paint the entire picture for the jurors. Amazon's concern over whether its product is used against its users comes down to bad publicity - people hearing about the Echo being used in court cases won't help the privacy concerns that already surround the device.

For this week's blog post, I can't seem to find a better or more appropriate topic than the one that has been the headline of nearly every major internet news outlet I follow - the recent vote by Congress to "keep a set of Internet privacy protections approved in October from taking effect later this year." This will have the far-reaching effect of making it easier for ISPs to gather and store the personal information of their users - browsing histories, app usage information, and more - in order to sell it to the highest bidder, most notably advertisers looking to target consumers more directly.
As I discussed in a previous blog post, the idea that these companies are saving information about us can be terrifying; though our names might not be attached to that information, enough identifying information is there to pinpoint us, and that's not going to sit well with many people - hence the outcry against the ruling. However, as I also discussed previously, the way that advertisers use this information is, by and large, not out-and-out malicious. It's simply a way to formulate better ad targeting, which is something many people may find useful, even if it can be creepy - for instance, my mother emailed me our itinerary for a trip to Las Vegas about a month and a half ago, I googled the places she mentioned, and suddenly my ads across many networks - Google, Tumblr, Facebook - were all Las Vegas-themed, encouraging me to buy Cirque tickets or stay at New York, New York even after the trip had concluded. Still, the fact that Congress is unwilling to rein in ISPs making yet another cash grab at the expense of their users' privacy - because indeed, that's what selling user information amounts to - is worrying, to say the least. Though the buying of specific individuals' personal information has been publicly ruled out by ISPs themselves - that's not to say what they would do behind closed doors - people are still funding campaigns through sites like GoFundMe, and even new nonprofits set up to donate to organizations like the ACLU, in order both to challenge the legislation and, in some cases, to actually attempt to purchase the browsing history of certain congresspeople who voted to halt the regulations. Though it's an enjoyable - and ironic - sentiment, the true battle will be waged with tools created to protect one's privacy, such as VPNs and browsers like Tor, and by ensuring that the sites you do use have HTTPS enabled - a more secure way of transmitting your information to the site in question, one that keeps ISPs from seeing as much of it.

For this week's blog post, I'd like to dive into a little more detail about Sesame Credit, China's new "social credit rating" designed to reward people it considers good citizens. Sesame Credit analyzes everything about a person's social life - from the purchases they make through Alibaba, to the posts and links they share on social media, even going as far as to factor in a user's friends' scores based on their social media presence. Being a good citizen earns one benefits, while being a poor citizen (using the word "democracy" repeatedly, for example, or sharing information on Tiananmen Square) will eventually reap negative consequences, such as slower internet speeds. This sort of social monitoring is exactly the all-encompassing Big Brother scenario we've long been terrified of - especially because of how Sesame Credit goes about suppressing opposition.
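The actual scoring formula isn't public, but a purely hypothetical toy sketch shows why folding friends' scores into your own creates such pressure to cut ties with low scorers:

```python
# Purely hypothetical toy model of a "social credit" score that blends a
# person's own score with their friends' average - NOT the real Sesame
# Credit formula, which has not been published.

def blended_score(own_score: float, friend_scores: list[float],
                  friend_weight: float = 0.3) -> float:
    """Weight a person's own score against the average of their friends'."""
    if not friend_scores:
        return own_score
    friends_avg = sum(friend_scores) / len(friend_scores)
    return (1 - friend_weight) * own_score + friend_weight * friends_avg

# An obedient citizen whose own behavior never changes still sees their
# score fall because one friend speaks out - and recover if that friend
# is dropped.
print(round(blended_score(800, [780, 790, 400])))  # 757: dragged down
print(round(blended_score(800, [780, 790])))       # 796: friend cut loose
```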
See, the terrifying beauty of Sesame Credit lies in its ability to monitor your friends' activities and attribute them to your own score. By not being a "good, obedient citizen" in China, one risks being ostracized and made a pariah by the people one called friends, simply because the association could drag their scores down and cost them benefits like getting government paperwork signed off faster, or even a discount at a hotel, as one couple described in a BBC article on Sesame Credit. By weaponizing gamification and turning it against disobedient citizens, Alibaba and Sesame Credit have turned everyone in China into Big Brother, selectively rewarding people for cutting ties with those known to speak out against the Chinese government's regime. And it horrifies me to think that companies like Yahoo are partnered with Alibaba, in effect supporting the giant company's totalitarian grip on the social lives of China's citizens - because make no mistake: though Sesame Credit is optional now, the plan is for it to be mandatory for all Chinese citizens by 2020. That we can just sit by and allow this sort of monitoring is awful; but, because Alibaba is Chinese-owned and -operated, there is little the US can do besides encouraging its own companies to drop their partnerships, taking a clear stance that the US will not stand for these human rights violations - but then, China seems to be no stranger to violating its citizens' rights.

For this week's blog, I'm going to get into the good and the bad of Uber, a company that's rapidly replacing taxis and public transport in metropolitan areas, and is even gaining ground outside of them. Uber has been the target of a lot of controversy as of late - from its breaking of the taxi strike in New York after the travel ban was put in place, to accusations of racism and sexism on the part of its drivers.
First, the bad - in an article on Alternet.org, a study details that "Uber drivers in Boston have a pattern of prejudice against black and female riders." The study goes on to show that drivers are twice as likely to cancel a ride if the customer has a black-sounding name, and likely to drive women on longer, more expensive routes as well. This sort of discrimination, though human in nature and perhaps impossible to stamp out entirely, should absolutely be met with a zero-tolerance policy by the company. If a driver is known to cancel rides fairly often, and those rides have a common element - say, that they were requested by women or by people with nonwhite-sounding names - then that driver should be punished appropriately, with either suspension or loss of their Uber driving privileges. The same goes for the routes taken, although these are less easy to track than cancelled rides. In order for Uber to maintain its station as a distinctly modern service, it needs to eschew the prejudices that currently plague it and make a concerted effort toward the equal treatment and support of its userbase. That being said, not all Uber drivers are at fault; a story on the SFGate site details "How an Uber driver stopped child sex trafficking in Elk Grove". The driver called police when a young girl - estimated by the driver to be twelve years of age - was being groomed for sex work by two older women also present in the car. Though Uber does not train its drivers in dealing with these sorts of situations, it was a heroic move by the driver, and his actions have been commended by the company; it is stories like this that Uber must set its precedent by, and addressing the claims of racism and sexism mentioned previously would do a great deal toward maintaining this standard.

In an article for The Economic Times of India, "Tech companies like Gmail, WhatsApp may be asked to store user information", Surabhi Agarwal describes a new movement toward data collection proposed by the Indian government. Under these new rules, email services and communication apps - among others - would be required to store data on their users, such as messages sent or received, or even personal information, for a set period of time. Of course, this movement has been met with opposition from these companies, which cite the difficulty of implementation and the invasion of users' privacy as their primary reasons for speaking out against it.
Despite this sudden outcry, plenty of companies already collect data in exactly this way. Acxiom in particular is a data marketing firm that offers information on consumers - including browsing habits, and labels identifying what type of consumer a user is, from "potential inheritor" to "adult with senior parent" to "diabetic focused". It then sells this information to companies for a profit, sometimes without the knowledge or consent of the users whose information it is selling. It's easy to think of all of this as horrifying - a blight on our privacy, that some company is, in the words of Vienna Teng, "gathering every crumb you drop / these mindless decisions and moments you long forgot". To be sure, the data collection and storage already done by Acxiom, and now proposed by the Indian government on the part of messaging apps, is worrisome if the consumer has not consented to their information being stored in such a way. But since it's already happening, and comparatively little can be done about the information that's already out there, perhaps it's also important to look at the benefits data collection may have for us. For example, in an article for Business Insider, the vice president of Macy's customer strategy division describes responsible data usage as such: "Consumers are worried about our use of data, but they're pissed if I don't deliver relevance. … How am I supposed to deliver relevance and magically deliver what they want if I don't look at the data?" Being able to examine consumer data in order to provide relevance in advertising and products is a convenience that many consumers may take for granted when lobbying for increased privacy. A delicate balance needs to be struck between collecting enough data to give customers the products they want and the advertising that will interest them, and not infringing on customers' personal right to privacy.

For today's blog post, I've decided to delve into the subject that seems to excite most people as soon as they hear the words "Artificial Intelligence" - self-driving cars. They're a marvel, to be sure, with many futuristic depictions of the world involving autonomous vehicles, along with many other 'far-fetched' conveniences of the future. By all accounts, they're an exciting invention. But there's an important moral conundrum that they bring to light with increasing regularity: that of the trolley problem, but intensified. Autonomous vehicles present the conundrum of saving oneself versus saving passersby, and this question is what concerns people most when posed with the idea of riding in one.
How do companies, as a whole, decide who to kill, if the situation arises? It's a dark thought, but one that AV companies have grown very familiar with. Many are choosing, as Business Insider states, to simply dodge the question. Mercedes reportedly told Car and Driver that their vehicles would prioritize passenger lives, but later recanted that statement and gave one decidedly more neutral - that "neither programmers nor automated systems are entitled to weigh the value of human lives." The original stance was based on the idea that, in a crash, the vehicle should protect those it can most directly protect, the ones it can best save - the people inside. But that position has the potential to be far more disquieting to the population at large: the idea that the people inside cars take precedence over everyone else. As someone interested in owning an automated vehicle once they become accessible to the general public - I hate driving, though I have to do so regularly to see family - the thought that my car favors my own life would certainly make getting one more appealing. And, while these questions are important ones, very rarely would they ever come into play - a relief, considering many of the other aspects of driverless cars make being on the road, even around other human drivers, much safer. There are other moral quandaries that arise when the subject of automated vehicles is brought up - namely, the livelihoods of those who drive for a living. Taxi, Uber, and Lyft services, delivery trucking, and many other industries rely on humans to drive - but that's a field of work that could dry up fairly quickly once driverless cars become mainstream and accessible to people and companies alike. It's the same concern that arises with robots taking over human jobs in other fields, and one that's fast approaching as Google's cars grow more and more advanced. Perhaps the employment of watchmen - people who don't drive, but navigate the cars for passengers and know the ins and outs of the cars' systems - would see an uptick. It's not something there's an easy answer to, unfortunately.