Emily Turnage
Gamification of Social Media
In this article by KQED Learning, the topic of AI in military organizations is discussed. Truly, I hadn't put much thought into it as a topic, but having heard about it offhandedly from classmates in Ethics, I decided to take a look, and I was surprised at what I found. I knew drones were widely used in operations throughout the Middle East, but not that their piloting could be, and in some cases has been, fully automated, and that they are made to make decisions about the use of lethal force. Even though this is happening in countries I will likely never visit, affecting people I will never come close to interacting with, it is still baffling to me.
I understand that the US is engaged in many militarized efforts. I understand that there may not be enough manpower, or that it might be too dangerous for said manpower, to do all of the things that these drones are doing. But when a machine is the one to decide whether someone should live or die, that's a terrifying thought. As outlined in the article, drones may be liable to make mistakes that people would not, based on imperfect recognition software, and a machine's algorithm botching a decision seems, for some reason, much worse to me than a human botching that same decision. A human will learn from that mistake; there are only so many improvements that can be made to technology, and at least in our lifetimes I don't think it will ever be perfected. By shunting these decisions onto machines, we lose the fairness of a human making judgment calls that a machine will never be able to make.

That's not to say AI in the military does not have its benefits. It allows us to carry out many more missions than we could with just soldiers in the field, and AI systems can, as the article outlines, perform many tasks without the fatigue that humans experience. Robots don't disobey orders, or at least that hasn't happened yet, though science fiction leads us to believe it is inevitable. And robots can be made much tougher than humans. So perhaps machines are best left to those tasks, like de-mining fields or patrolling a given area, rather than making decisions about whether or not to kill groups of insurgents who may not even be armed. There is something to be said for the automation of certain tasks, and the potential benefits to our military are great, but the decision to kill a human is something I personally believe should not be automated.
13 Comments
James Barquera
5/20/2017 04:58:03 pm
That seems to be the way society is going. I just read an article recently about how Russia has developed a robot that can shoot a gun. It seems as if our further advancements in technology could result in our downfall.
Jose Cortez
5/20/2017 10:44:36 pm
I think that machines that can make their own decisions will bring a new age for humankind. I'd like to imagine it would lead to something along the lines of the spaceship in the movie Wall-E.
Jose Cortez
5/20/2017 10:46:14 pm
If you have already given a machine the command to kill someone, isn't it best that a machine with no feelings does it instead of another human? This is the kind of thing that causes PTSD for our troops.
5/8/2017 05:41:19 pm
I understand that there is a sense of unfairness when an automated drone or weapon takes human judgment out of deciding whether or not to kill a target. However, in warfare and the military, there is not much fairness. It is all about winning. I am not saying this is ethical by any means, but the government would not adopt a weapon that is too ethical. In my opinion, these automated weapons that make decisions will benefit military troops very much. Many troops come back from war traumatized, with a high possibility of having PTSD from all the killing they have witnessed or even done themselves. These automated weapons would relieve troops from having to make these strenuous decisions on the battlefield and reduce the amount of mental illness in the military. I think in this situation, we need to think about the people closest to us as Americans, being our military.
Angela
5/11/2017 08:26:46 pm
Hi Emily, I completely agree with you: even though it would still be a mistake, it would be better for a human to make the mistake than a machine, because then there's a chance for correction. The world is becoming so automated nowadays, but we need to draw a line when human lives are at stake.
Fernando Madrigal
5/12/2017 04:44:29 pm
I enjoyed reading your blog post, and I agree that it's better for humans to make decisions between life and death in military missions than for drones or any other robotic military devices to do so. Machines do not have emotions and are only programmed to get the job done.
5/20/2017 11:20:08 pm
Stephen Hawking has warned about weaponizing AI. The main fear is the singularity: we would be giving AI the tools needed to enslave us. That might sound extreme, but it's not that far off. However, if we were to use automated bots to fight our wars, the human cost would go down, but wars would just become battles of resources.
Author: I am a senior studying Communication Design, with an emphasis in Game Design. I like playing video games, writing, and yelling too loudly about things I care about.