Week 15 - Ethics

Act Utilitarianism

    Act utilitarianism is the single most beneficial theory in the world, period. It holds that the 'right' course of action in any situation is the one that grants the greatest overall net gain to every party involved (or the smallest net loss). This means that in every case, following the theory provides the most good - making it, on a surface level, the clear-cut winner. However, it skips over the individual: as long as one person's suffering pays for a multitude of other people being happier, it is all good in the eyes of this theory. Furthermore, it is very difficult to quantify just how 'valuable' an action is to any given person, as everyone has a different measuring stick and a whole lot is left to interpretation in these scenarios. But how does this tie into IT?
    Well, for this topic, I will sadly have to be as bland as one possibly can be and talk about self-driving cars. I know, I know, very boring - but honestly, ethics just isn't my thing. I am very one-sided about it, no matter how much I debate with other people - so I will always vouch for this theory. Self-driving cars are the most recent and clearest example of morality in IT, and the statistics behind them are a great way to put a 'value' on any course of action, or even on a life. One great site that has taken a crack at testing this is moralmachine.mit.edu. Using the site's statistics as a baseline, we could estimate how valuable the driver of a car is compared to any combination of people, animals, ages, and genders on the road. Getting stuck in an ethical dilemma does not provide any solutions - it only provides useless questions. Why are we even debating whether or not the driver should sacrifice his own life to save three innocent people? These questions should not even be up for debate - their answer is clear, and anyone who says otherwise sounds to me like they are justifying the deaths of multiple individuals. A one-for-one trade I can understand, but multiple people?
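    To make the utilitarian arithmetic concrete, here is a minimal Python sketch of the decision rule described above. The utility weights are invented placeholders (Moral Machine collects survey preferences, not numbers like these), so this is an illustration of the calculus, not a real valuation:

    # A purely illustrative sketch of the act-utilitarian decision rule.
    # All numbers below are invented placeholders, not real valuations.
    HARM_COST = {"driver": 10, "pedestrian": 10, "animal": 2}  # hypothetical weights

    def net_utility(outcome):
        """Total (negative) utility of everyone harmed by one possible outcome."""
        return -sum(HARM_COST[party] for party in outcome["harmed"])

    def choose_action(outcomes):
        """Act utilitarianism in one line: pick the act with the greatest net utility."""
        return max(outcomes, key=net_utility)

    dilemma = [
        {"action": "swerve", "harmed": ["driver"]},
        {"action": "stay", "harmed": ["pedestrian"] * 3},
    ]
    print(choose_action(dilemma)["action"])  # -> swerve: losing one beats losing three

    The entire theory boils down to that one max() call - the hard part, as noted above, is deciding where the numbers come from in the first place.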
    If we do not choose an ethical theory to follow, we'll never get anywhere significant on the topic of self-driving cars and the other important things that AI will soon have to manage. However, this theory does bring out one issue - one that was ever-so-subtly hinted at way back in the day, for example in the film 'Terminator'. Compared to the AI, we might be the most harmful factor. If we programmed all of our AI to be utilitarian, we might end up being classified as a net negative and be largely eliminated. What is the purpose of a human being once technological evolution reaches the point where machines can learn at the same pace as humans, or even faster? Then again, what is the purpose of an AI without anyone to provide for? Is the happiness of a human worth more than the environment? Is the life of a human worth more than the life of the native wildlife? Can we really even justify our own existence? Anyway, that is the kind of existential dread spiral this topic leads into, and it really isn't any fun to talk about, nor is it interesting to just write about without having an in-depth conversation with someone else. Everything in ethics is so incredibly subjective that the only thing a person can learn from talking about it with others is the others' views. That's it. No one will ever know any 'true' answer to ethics without also jeopardizing the entirety of society. It is futile. Just like our lives.
    Hope you have a nice week :}
