Asimov Decreed the "Laws of Robotics" — His Robots Never Revolted


There's no doubt about it: every new technology raises ethical questions. There was a time when the horse and buggy felt threatened by the automobile; it went too fast, it scared the horses, someone was going to be killed. Was it okay to put humans and animals at risk for the sake of easy travel? Those same ethical questions will apply to artificial intelligence in the future, only twice over.


Imagine... the year is 2710, and artificial intelligence has come to dominate the world of science. There are now artificial beings equal to man mentally and superior to him physically. From outward appearances they are indistinguishable from humans; the technology has advanced that far. They have gained consciousness, and they exist to serve mankind, like slaves.

The ethical question is: will we use artificial intelligence for good or for evil? Will we use them in the military? Is it ethical to build an artificial human with its own conscience to protect us? Could their conscience allow them to kill humans, their creators?


We will have to be careful when developing artificial intelligence and make sure it benefits mankind before we learn the hard way that it can harm us. That's why artificial intelligence has to follow Asimov's Laws of Robotics. Isaac Asimov was a novelist. His robots did not revolt.


Asimov's Three Laws of Robotics



1.    A robot may not injure a human being or, through inaction, allow a human being to come to harm.


2.    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.


3.    A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
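

Taken together, the three laws form a strict pecking order: the First Law overrides the Second, and the Second overrides the Third. Purely as an illustration (this sketch is mine, not Asimov's and not from this post), here is how that priority logic might look in Python. The Action class, its fields, and the choose() function are hypothetical stand-ins, and real "harm" is obviously nothing a true/false flag could ever capture.

from dataclasses import dataclass


@dataclass
class Action:
    """A candidate action and its predicted consequences (hypothetical example)."""
    name: str
    harms_human: bool        # would this injure a human, or let one come to harm through inaction?
    obeys_human_order: bool  # does it carry out an order given by a human?
    preserves_robot: bool    # does it keep the robot itself intact?


def permitted(action: Action) -> bool:
    """First Law: never harm a human, by action or by inaction."""
    return not action.harms_human


def choose(actions: list[Action]) -> Action | None:
    """Apply the Laws as a strict priority: First > Second > Third."""
    candidates = [a for a in actions if permitted(a)]  # the First Law filters everything else
    if not candidates:
        return None  # refuse to act at all rather than harm a human
    # Among permitted actions, prefer obedience (Second Law), then self-preservation (Third Law).
    return max(candidates, key=lambda a: (a.obeys_human_order, a.preserves_robot))


if __name__ == "__main__":
    options = [
        Action("follow an order to attack", harms_human=True, obeys_human_order=True, preserves_robot=True),
        Action("shield the bystander", harms_human=False, obeys_human_order=False, preserves_robot=False),
        Action("stand idle", harms_human=True, obeys_human_order=False, preserves_robot=True),  # inaction lets the bystander come to harm
    ]
    chosen = choose(options)
    print(chosen.name if chosen else "no permissible action")

Run as written, the sketch picks "shield the bystander": once a human would come to harm, whether by action or by inaction, obedience and self-preservation both lose out.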


Artificial intelligence may one day become equal to humans, but if it is built to follow the Laws of Robotics, it will never be able to become our enemy.


*If you like my blogs, check out my book "ONE TWO ONE TWO, a ghost story," on sale at Amazon for only $2.99 on Kindle, or read it for free by joining Amazon Prime.



Dog Brindle


1 comment:

SaltHeart said...

No, but R. Daneel Olivaw did become a subversive, de facto leader of the empire; a veritable "Wizard of Oz" (i.e. the man behind the curtain).
And they evolved (the Zeroth Law). With a philosophical concept like the Zeroth Law incorporating "harm" and "humanity," we end up with the same problems we have now: who decides what and how much harm is acceptable for the benefit of a "humanity" that is itself undefined? (1800s USA did not consider Negroes part of humanity; Orthodox Jews consider non-Jews a lower form/class of human...) And how do these laws protect the rest of the biosphere, except where it is useful in service to "humanity"?

http://en.wikipedia.org/wiki/R._Daneel_Olivaw
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics