Today for my blog I chose to write about the article Fake News Is About to Get Even Scarier Than You Ever Dreamed. The article also ties into my project topic of Fake News in the Media. It addresses two new programs that would allow for some pretty terrifying results. The first of these is Real-Time Face Capture and Reenactment, which lets a person sit in front of a webcam and, in real time, edit the facial movements of a person being filmed. The second is new audio manipulation software being developed by Adobe that would let a user feed in samples of a person's speech and have the computer mimic that person's voice. Used in tandem, these two programs would allow anyone to quickly and easily edit a video to make it look as if the person recorded were saying anything they wanted. Suddenly the president admits he is a Russian spy, or Obama admits to being a Lizardman (yes, there are people who think that). The point is that soon we will not be able to believe anything anyone says on TV or in a video.

I am unsure why these programs were even made. I can see potential uses for the audio manipulator, but for the real-time facial manipulation software in particular, I cannot see any use other than the spread of false information. With fake news being such a real issue right now, I cannot fathom why these two programs would be developed at this time. They would just become more tools for the spread of misinformation. The entire presentation of the facial manipulation software was the developers manipulating live news footage, as if they are not even denying what this software will be used for. Fake news is dangerous enough as is without giving those who create these fake news stories an even easier time deceiving people.

The scariest part of all this is how clean the manipulation looks. If viewers were unaware of the manipulation going on, there would be little to indicate to the untrained eye that anything was amiss.
Once this software becomes commonplace, even actual unedited footage will become the subject of debate. “Did he actually say that or was it edited to look like he did?” This not only helps spread misinformation but also makes it harder to know what is even real anymore.
http://www.vanityfair.com/news/2017/01/fake-news-technology
For today's blog I plan on writing my thoughts about How Much Lying Is Acceptable Online. Though the article was more about online dating than being online in general, I still thought I would leave my thoughts on the matter. In my opinion, you should not lie online. On a site that requires you to give your name, it is fine to give an alias, but on a dating site, or when writing "facts" on Wikipedia or another site, there is no reason to do so. In the case of the dating website, the lies may get you that first date, but when found out they will probably ruin any chance there may have been. When posting false facts online, there are a million other sources that contradict the false source. There is also the issue that false facts might breed confusion and spread misinformation that causes social divides and other issues down the road.

Though I hate to make the obvious political jab, I would be lying if I said it was not the reason I opened the article in the first place. The reason a lie should not be told, especially online, is how easy it is to fact-check. Take, for example, the "Bowling Green Massacre," a reported terrorist attack cited by Kellyanne Conway, counselor to the president, in order to defend his travel ban. The trick, though, is that the alleged massacre was disproven with a quick Google check. There is no reason to lie so blatantly to people online. People are not stupid, and the majority of them will check any information given to them before believing it. When you are caught in a lie, it is you who looks like an idiot, or worse, it makes it look like you think the people you are talking to are idiots. You cannot trust people with big issues if they cannot be trusted with small ones. Lying online only hurts your own credibility and does more harm than good.

A bit of a shorter blog post today, but I feel this issue is pretty open and shut.
If the information you are filling out is not information you are comfortable giving, and it is only for your own use or the use of a corporation or website, then an alias is fine. But if the information is being used to potentially meet a future love interest, it is better to be truthful. The lie might get you in the door, but it can just as easily get the door slammed in your face soon after.
http://www.evanmarckatz.com/blog/online-dating-tips-advice/how-much-lying-is-acceptable-online-2/

Today I chose to write about self-driving cars and my stance on their use. I bring this up due to the article Uber's Self-Driving Cars Start Picking Up Passengers in San Francisco, which was on the list of articles to be read for class. I like the idea of not having to drive ever again; however, I am aware that the technology still requires an onboard driver to be present in case an error occurs. Honestly, though, I would not feel comfortable riding in a self-driving car at this time, because I do not think any computer can accurately predict what the human drivers behind the wheels of the cars in front of or behind your vehicle will do. Until all cars are self-driving, I do not think I will ride along in one. If all the vehicles on the road were self-driven and in some way linked to be aware of each other, then I might feel OK riding in a self-driving car.

The issue is that even then, I still think they would be dangerous to pedestrians. For now there are operators in the car in case of malfunction, but the idea is to be rid of the operator at some point in the future. If that is the case, then all it would take for an accident to occur is for a pedestrian to jaywalk. I understand that there are sensors in place to recognize if a person is walking in front of the vehicle, but what if it is dark? What if the sensor is dirty? What if the software fails to recognize the pedestrian? The point I am making is that even though a self-driving car may actually be safer, a lot of people, myself included, may not yet feel comfortable putting their lives in the hands of an autonomous vehicle. There are simply too many situations and variables that need to be considered when programming a self-driving vehicle. What if the stoplight is broken? Flashing? Not standard in design?
What if there is some other green or red light that the computer mistakes for a stoplight? There are so many questions and situations that need to be addressed when driving, and unlike a human driver, a program has to know about all of them all the time, as well as the exceptions to those rules and the exceptions to those exceptions. As an example I will use the one presented to us in class: if an accident is unavoidable, does the car forgo your safety to protect the civilians outside the vehicle, or does it potentially harm pedestrians to save the passengers of the car? There is no right answer to this question; either way the car is going to get in an accident, and either way someone is going to be hurt or killed. So what does the car do? What does the driver do? Until there is a definite and satisfactory answer to this and other concerns, I do not feel that fully self-driving cars can be implemented.
https://techcrunch.com/2016/12/14/ubers-self-driving-cars-start-picking-up-passengers-in-san-francisco/?ncid=rss

Today I am writing my thoughts on the Telegraph article titled Internet Trolls Replace Racist Slurs with Codewords to Avoid Censorship. The article talks about how users on various social media sites replace common racial slurs and negative words with other "inoffensive" words to get around censorship. In my opinion, this shows an underlying flaw in the system used to monitor these sites. The issue is that these sites are only searching for words, not the context of the words. This is a problem because it disallows the use of certain words that may have another, inoffensive meaning while not actually solving the issue of abusive chats. The article makes it seem that these sites have cracked down on any word that might be viewed as offensive. This seems to include words such as "gay," which is not only a term for a homosexual individual but also a word meaning happy. In the current system, a comment stating "This post makes me feel gay." would be flagged, as it would appear to the automated system to be abusive instead of being seen in its actual context of the above message making the commenter happy. Meanwhile, on the opposite end of the spectrum, we have a comment mentioning skittles and car salesmen that gets by because no offensive words were used, even though it was clearly meant to be offensive or derogatory. This shows me that an automated system cannot be used to regulate a comment section. In my opinion, a comment section may simply need to be regulated by a moderator, with comments approved on a comment-by-comment basis, or, as some content creators on YouTube have decided, comments may need to be disallowed entirely.
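The flaw I am describing is easy to see in code. Here is a minimal sketch (my own toy example, not any platform's actual system) of a word-based filter; the blocklist contents and function name are hypothetical:

```python
# Toy word-based comment filter: it matches words, not context.
BLOCKLIST = {"gay"}  # hypothetical blocked term

def is_flagged(comment: str) -> bool:
    """Flag a comment if any of its words appear on the blocklist."""
    words = {w.strip(".,!?\"'").lower() for w in comment.split()}
    return not BLOCKLIST.isdisjoint(words)

# An innocent use of a blocked word gets flagged...
print(is_flagged("This post makes me feel gay."))        # True
# ...while a coded insult sails straight through.
print(is_flagged("Typical skittles and car salesmen."))  # False
```

The filter cannot tell the happy meaning from the abusive one, and it has no entry at all for the codewords, which is exactly the two-sided failure the article describes.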
As much as I like to read other people's views on content and articles that I have viewed or watched myself, it is my opinion that there is no way to monitor all those negative comments in a timely manner and that no automatic system can accurately prevent abuse. It is my stance that those who have removed their comment sections may have the right idea. As harsh as it sounds, most people do not comment on content anyway. The vocal minority are often the ones who comment and post abuse, while the majority usually just watch the video or read the article and leave. It appears to me that all comment sections do is give people a way to spew abuse and hate, while those who actually comment constructively are targeted by those who just want to troll or abuse others. At this time I feel there is no practical way to monitor a large chat. A small comment section or chat room can be looked over by a moderator, but that stops being practical once a chat gets too large. As I stated before, as much as I like reading comments, sometimes it may be better to simply turn them off: if there is nothing nice to say, do not say anything at all.
http://www.telegraph.co.uk/technology/2016/10/03/internet-trolls-replace-racist-slurs-with-online-codewords-to-av/

Since I am new to blogging, I will start with my thoughts on the article The Code I'm Still Ashamed Of. The article tells the story of a young programmer who, for one of his first jobs, designed a website for an unnamed drug. On the website was a survey that was supposed to recommend a drug based on the information provided, but it instead always recommended the drug unless the survey taker claimed to already be taking it or to be allergic to it. Though the programmer did what was required of him, he later learned that someone who had been taking the drug had committed suicide. The question then posed by the article is whether or not the programmer did the right thing in coding the survey.
From the client's point of view, the programmer did everything they asked of him, and that is important. As a coder, it would not reflect well on you to go against the client's wishes. Doing so could reflect badly on future contracts and might even get you blacklisted from certain companies. If the programmer had a problem with what was being asked of him, he should have refused the contract from the outset.

From an ethical standpoint, the coder should not have made a fake survey. He understood that the survey was designed to deceive those who would visit the website. Despite knowing this, he did so without hesitation and did not even consider the consequences until a friend alerted him that someone on the drug had committed suicide. Despite feeling guilty about it, he did not address it with his colleagues and left the site as it was.

Honestly, I feel this was a learning experience for this programmer, and an invaluable one at that. He learned that his actions held consequences and resolved to look at what he was coding ethically in the future. The catch to all this, though, is that the programmer only learned this lesson at the cost of another person's life. It was a horrible wake-up call for him, and one that should not have been needed. In truth, I am glad I read his story, and it served as a valuable lesson, but I do not believe the story reflects well on the programmer.

To conclude, the programmer should not have even accepted the contract from the pharmaceutical company that hired him. The initial contract he accepted did not give enough information, and he should have called his would-be employers out on it before starting work on the site. Following that, when he got the details of the survey, he should have refused the contract entirely. I mentioned above that you should not go against a client's wishes when building their site, but that does not mean you cannot quit the job.
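To see why the survey was deceptive rather than just lazy, it helps to sketch the logic the article describes. This is my own reconstruction, not the author's actual code; the drug name, keys, and messages are all hypothetical:

```python
# Sketch of the rigged "survey" logic described in the article:
# every path recommends the drug unless recommending it would be
# plainly unsafe to display on screen.
def recommend(answers: dict) -> str:
    # The only two cases where the drug is NOT recommended:
    if answers.get("already_taking_drug"):
        return "You are already taking Drug X."
    if answers.get("allergic_to_drug"):
        return "Drug X is not recommended."
    # Every other combination of answers, whatever the symptoms,
    # funnels to the same result -- the questions are theater.
    return "We recommend Drug X."

print(recommend({"symptoms": ["mild headache"]}))  # We recommend Drug X.
```

Written out like this, it is obvious the survey never weighs the answers at all, which is exactly why it should have set off alarm bells before it shipped.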
Honestly, it reflects poorly on him that he did not realize what he had done was wrong until someone committed suicide as an indirect result of his actions.

https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e#.z44bof9di
Author: Ian Kindall, a CD major emphasizing in Game Design
May 2017