Emily Turnage
Last week I looked at the ethics of allowing AIs to make life-or-death decisions; this week I will examine another facet of military AI: swarm drones, as covered in this CBS News article. The drones discussed in the article, called Perdix, can be deployed in huge groups to carry out any number of tasks in dangerous territory. And because they're so small, the article describes their expendable nature as "no great loss if it crashes into the ground." The article goes on to describe some of the uses of these swarm drones, namely reconnaissance and searching for targets in a specified area. On an actual mission, the drones would be linked to weapons systems and able to tell them where and when to fire.
This strikes the same note as last week's post: drones making collective decisions about who to kill, and how, and when. Their expendability makes the question even thornier, because it lets people deploy them on tasks that would never have been possible for humans, given the mortality risk. These swarm drones would revolutionize warfare, and perhaps not in a good way. How would we fight wars if we had nothing to lose? If we risked only material wealth, something easily replaced, instead of factoring real human lives into the equation? How do you deal with the fallout from a mistake on the part of one of the drones? These questions must be answered both by the companies developing these technologies and by the military hierarchies that plan to use them.

If changing the game of war so drastically means striking enemy territory without risking a single human life, does that make those with the best technology the sole authors of our future history? Technology used to subdue, to control, and to substitute for real human lives is questionable at best and prone to ethical nightmares at worst. As far as I am personally concerned, the idea of these swarm drones carrying out military operations completely autonomously is a chilling thought: no human judgement beyond what makes it into the machines' programming is being put into the weighing of human lives. Though these drones could improve military tactics, their programming is too fallible for them ever to operate fully autonomously. If we are to use this technology regularly, I would prefer to have at least a human behind it, ultimately overseeing its actions and overriding them if necessary.
In this article by KQED Learning, the topic of AI in military organizations is discussed. Truly, I hadn't put a whole lot of thought into it as a topic, but having heard about it offhandedly from classmates in Ethics, I decided to take a look - and I was surprised at what I found. I knew drones were widely used in operations throughout the Middle East, but not that their piloting could be - and, in some cases, has been - fully automated, or that they are made to make decisions about the use of lethal force. This, even though it's happening in countries I will likely never visit, affecting people I will never come close to interacting with, is still baffling to me.
I understand that the US is engaged in many militarized efforts. I understand that there may not be enough manpower - or that it might be too dangerous for said manpower - to do all of the things these drones are doing. But when a machine is the one deciding whether someone lives or dies, that is a terrifying thought. As the article outlines, drones may make mistakes that people would not, owing to imperfect recognition software, and a machine's algorithm botching a decision seems, for some reason, much worse to me than a human botching that same decision. A human will learn from that mistake; a technology can only be improved so far, and I don't think it will ever be perfected, at least in our lifetimes. By shunting these decisions onto machines, we lose the fairness of a human making judgement calls that a machine will never be able to make.

That's not to say AI in the military does not have its benefits. It allows us to undertake many more missions than we could with only soldiers in the field, and AI systems can, as the article outlines, perform many tasks without the fatigue that humans experience. Robots don't disobey orders - or at least, that hasn't happened yet, though science fiction leads us to believe it is inevitable. And robots can be made much tougher than humans. So perhaps machines are best left to those tasks - de-mining fields or patrolling a given area - rather than having to decide whether or not to kill groups of insurgents who may not even be armed. There is something to be said for the automation of certain tasks, and the potential benefits to our military are great, but the decision to kill a human being is one that I personally believe should not be automated.