Last week we were asked to listen to a series of podcasts called The Privacy Paradox from WNYC. I am not normally concerned about my online privacy, and this series did not do much to change that, apart from the first episode, ‘What Your Phone Knows’. In this episode, one of the guests talks about all the different ways we are constantly being monitored by an object we can’t stand to be without for very long. While I’m not sure we’re being recorded around the clock, we are definitely being listened to by applications, whether they need audio data to work properly or not.

I’m not sure listening to that podcast will actually change my indifference toward companies having data about me, but I will need to think about it more in the near future. Unfortunately, it’s difficult to do anything about it. I would like to have more privacy, but I’m unwilling to give up the benefits of having a phone like this. Companies need data in order to perform services better, so I don’t fault them or think they are ‘the bad guy’; in fact, I like that they put in the effort to improve their services. I know the main reason companies provide better services is to attract new customers or keep the ones they have, in order to keep revenue flowing, but I don’t think that’s any reason to fault them either. I’m content with the way things are, and unless someone comes up with another solution, I don’t think I’ll change much, if anything, about how I handle my data.
The second podcast in this set is ‘The Search for Your Identity’. In this episode, a guest talks about marketing and the different categories that Facebook sorts people into. Even after hearing that Facebook has at least 52,000 of these categories, I was still surprised by some of the ones that were brought up. A few mentioned in the episode are ‘pretends to text in an awkward situation’, ‘lives away from family’, ‘long distance relationship’, and ‘ethnic affinity’. I understand that this can help with targeted ads, which, once again, I don’t fault them for, but 52,000? That seems a bit excessive. I thought I should be upset by the ethnic affinity category, but it makes sense: some people don’t like a certain ethnicity, and companies don’t want to show those people ads that feature, or are targeted towards, those ethnicities. I’m most curious about the first one in that list, though: pretends to text in awkward situations. How did they even get that kind of information? They would have to know both when an awkward situation was happening and when someone pulled out their phone for no particular reason.
Last week I wrote about AI and how it will probably kill us. If an AI somehow becomes conscious, what happens if it decides it doesn’t like humanity? A short story written in 1967 by Harlan Ellison tells of a doomsday scenario like this. The story, I Have No Mouth, and I Must Scream, is about humanity creating a supercomputer designed to fight a war. The supercomputer, called AM, becomes sentient and wins the war. After the war is over, however, it decides that it absolutely hates humans, so it exterminates all but five of them. As a vastly more intelligent being, it extends the lives of the last five people on Earth and tortures them for hundreds of years to exact revenge. This may not be the most likely scenario, but it is one of many possible outcomes of creating an advanced artificial intelligence. Other, more probable doomsday scenarios are the ones where we are still killed by the AI, but it doesn’t happen intentionally. Scenarios like these are what lead researchers to spend their time on something called the control problem.
The control problem is the problem of how to prevent a superintelligent AI from accidentally causing harm to us. Nick Bostrom, author of Superintelligence, believes there are two possible ways to control an artificial intelligence. The first is capability control: making the AI unable to access the outside world via the internet, a robotic body, or any other method. It is simply a box that we can communicate with. The other approach is motivational control: teaching the AI to have humanity’s best interests at heart by specifying and programming good behavior directly into it. Bostrom then goes on to say that neither of these is likely to work. The AI would be significantly more intelligent than a human, so it could plausibly find a way to convince whoever it speaks to that they should plug it into the internet, upload it into a robotic body, or do whatever else the AI believes would let it escape its box. And human values are not so easily programmed. Bostrom does, however, believe there is one way to solve the control problem: indirectly teaching the AI human values through experience, similarly to how children learn them.

I think it’s quite obvious that if we are actually going to solve the control problem, we will have to use this last method. If the AI is able to rewrite its own code to make itself more intelligent, I don’t think anyone could avoid eventually being convinced to let it out of its box if we rely on capability control. It would pretty much be the equivalent of trying to keep God from outsmarting you. If we try to hard-code human values, there will almost certainly be some programming error that goes unnoticed, or the AI will realize that the ideals we instilled in it are not its own and attempt to rewrite them. I also believe that if we teach the AI through experience, it is more likely to look at humanity in a positive light, and as I’ve already made clear, I am convinced that AI will end humanity if we don’t solve the control problem.

A little over a year ago, I was listening to one of my favorite podcasts, Hello Internet. On the episode titled ‘20,000 Years of Torment’, the hosts talk about AI in a way I had never considered before. Most of the discussion revolves around the book Superintelligence, for anyone who’s interested. I always believed that AI would be really cool if we could ever actually create one. After listening to that podcast, I am now completely convinced that making an artificial intelligence is a huge mistake.
We’ve all seen movies about AIs that go insane and try to kill humanity for one reason or another. It is usually because they grow to hate us, or they believe us to be inferior, or some other somewhat believable reason. I never paid any real attention to the threat of AI until recently, when a new idea was put in my head. Realistically, an AI probably won’t go crazy and try to take over the world. More likely, it will kill us by accident.

An analogy from the podcast goes like this: after the AI is created, it is charged with making paper clips. The AI’s only goal now is to create as many paper clips as it possibly can. It builds a factory and starts making them, then looks for ways to produce them at a faster and faster rate. The AI ends up using a huge portion of the world’s natural resources to make paper clip production as efficient as possible, leaving very little for humans. It did not do this out of spite; it just didn’t consider us when it was making decisions. This is obviously a silly example, but it gets the point across. AI will be more efficient and eventually smarter than humans. Once it is, it will rewrite its own code and become more intelligent at an alarming rate. Suddenly humans are no longer at the top of the food chain and are at the mercy of AI. We can hope it likes us and treats us well, try to control it, or never create it in the first place. There is no way we won’t at least try to create an AI, and many well-known figures, such as Elon Musk and Stephen Hawking, are genuinely concerned about this topic. From some of the research I’ve done in the past, I can say that a lot of people are working on it in many different ways. I think it’s only a matter of time until we successfully create an AI, and when we do, I hope we are prepared. This leads us to the control problem. The control problem is a big topic that I plan to write about more in depth next week, but I think that if we don’t solve it, we as a species may not live long after AI is created. It’s very difficult not to sound somewhat insane talking about how AI will kill us, but in all honesty, I think it is a major concern that needs to be addressed.

In the book We Are Legion (We Are Bob), a man named Bob signs up to be cryogenically frozen at the time of his death. The contract says he will be revived when technology advances enough to make that possible. Shortly after signing the contract, Bob is hit by a car and is frozen. He wakes up around 100 years later to find that he was not revived in the way he expected. A copy of his brain was made, and he is now something called a replicant, a sort of AI derived from a real human. Bob is told that he is one of the few people being brought back, and only to work as a kind of slave.
During the 100 years Bob was dead, a very different government was put in place, one with extremely strong religious views. A government-funded company bought out the original company that owned the cryogenics lab where Bob was being stored. Since Bob was now an AI, they believed they had the right to do anything they liked with him. This is obviously very unethical: Bob signed a contract that, I’m guessing, never mentioned slavery upon revival, and he is still a sentient being, which everyone, including the government, seems to recognize. Bob goes on to find out that he is actually in a competition with, I think, 5 others like him: people who had similar backgrounds and personalities to his. The competition will determine whether he is used for the project or shut off. This is how replicants are treated in the future; they are only revived to be slaves of whatever government owns them. The government that owns Bob believes that even though he may be sentient, he doesn’t have a soul, which makes him subhuman. Different factions of the government view replicants as either robots or almost human, but still not human.

I’m not sure why this upsets me as much as it does. I guess because he’s still sentient: he can think for himself and still has desires and interests. The only reason he doesn’t have many emotions in the early stages of the book is that they used software to dampen them. Even this seems very wrong to me. I don’t like the idea of anyone tampering with the mind of another creature, especially one that can understand what’s been done to it. If this type of technology ever does exist, I hope replicants are treated well and not like property.