
Wednesday, August 28, 2019

Ethics for a society of humans and automatons Essay

Forester and Morrison strongly suggest that "computer systems have often proved to be insecure, unreliable, and unpredictable and that society has yet to come to terms with the consequences… society has become newly vulnerable to human misuse of computers in the form of computer crime, software theft, hacking, the creation of viruses, invasion of privacy, and so on" (ix). The ethical dilemmas, however, do not arise simply from the fact that there are risks involved with the automatons. More than the risks, once automatons become deeply entwined in the daily lives of human beings, we have to deal with far more complex issues that ethically challenge the governance of such a world. Allen, Wallach and Smit are of the view that "we can't just sit back and hope things will turn out for the best. We already have semiautonomous robots and software agents that violate ethical standards as a matter of course. A search engine, for example, might collect data that's legally considered to be private, unbeknownst to the user who initiated the query" (12).

Three Laws of Robotics

When we consider ethics in terms of automatons, it is necessary to look at Isaac Asimov's three laws of robotics. These laws were delineated in his famous 1942 short story 'Runaround'. … It means that if a robot seeks to protect itself in a given situation, it must not do so at the expense of harm to human beings. The ethical laws pertaining to moving machines are considered to be mechanical, whereas ethics is by definition anthropocentric: it involves ruminations on living a life that is worthy of living. Asimov's three laws are an important starting point in understanding machine ethics: "1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law. 3. A robot must protect its own existence as long as such protection does not conflict with the first or second law" (as quoted in Anderson, 477-78). These laws as originally proposed by Asimov imagine automatons as slaves of human beings. Moreover, the automatons are not even considered able to exist relatively independently of human beings. Asimov has "provided an explanation for why humans feel the need to treat intelligent robots as slaves, an explanation that shows a weakness in human beings that makes it difficult for them to be ethical paragons. Because of this weakness, it seems likely that machines like Andrew could be more ethical than most human beings," argues Anderson (478). However, in the present world, the complex interactions that take place between humans and automatons take us beyond the purview of these three laws concerning the ethical governance of a mechanized world.

Altering the Ethical Man

Albert Einstein put forward the question "Did God have any choice?" as the big question faced by humanity. In a society of automata, human beings are faced with another question: did human beings have any choice?
