Isaac Asimov's Three Laws of Robotics (2024)

The Importance of Isaac Asimov's Three Laws of Robotics

Philip M. Wells

Many science fiction authors have considered the idea that one day, "intelligent" mechanical beings could be physically, as well as mentally, superior to humans. These authors also often wonder what would happen if these robot beings simply decided that humans are unnecessary.

To help alleviate this problem, Isaac Asimov proposed the Three Laws of Robotics, which state:

1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.

Asimov's idea is that these rules are so deeply embedded in the "brain" of every robot made that if a robot were to break one of them, its circuitry would be physically damaged beyond repair. Assuming this is technically possible, and that the rules were embedded in every robot made, they might be the only safeguard sufficient to keep robots from taking control of the world from humans.
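Purely as an illustration, the strict priority ordering among the three laws can be sketched as a simple decision procedure. Everything below is hypothetical — Asimov imagined the laws physically embedded in a robot's circuitry, not as a software check, and the action representation is invented for this sketch:

```python
def permitted(action):
    """Decide whether an action is allowed under the Three Laws.

    `action` is a dict of boolean flags -- an invented representation
    used only to illustrate the laws' priority ordering.
    """
    # First Law: a robot may not injure a human being, or, through
    # inaction, allow a human being to come to harm. This outranks
    # everything, including direct orders.
    if action.get("harms_human"):
        return False
    # Second Law: obey orders given by human beings, except where such
    # orders would conflict with the First Law (already ruled out above).
    if action.get("ordered_by_human"):
        return True
    # Third Law: protect its own existence, so long as that does not
    # conflict with the First or Second Laws.
    if action.get("endangers_self"):
        return False
    return True

# A robot refuses an order to harm a human (First Law outranks Second):
print(permitted({"harms_human": True, "ordered_by_human": True}))   # False
# It obeys a self-endangering but harmless order (Second outranks Third):
print(permitted({"ordered_by_human": True, "endangers_self": True}))  # True
```

The key design point the sketch captures is that the laws are not three independent rules but a hierarchy: each later law applies only in the space of actions the earlier laws leave open.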

Consider a robot that is physically superior to humans. It can move faster, is far stronger, won't "break" as easily, and doesn't tire. It is also quite aware of its surroundings via sensory devices similar to humans', but potentially much more accurate. These robots could communicate over a very fast wireless network and be solar powered. Such a machine is not that far off; perhaps a decade or two at most.

Now consider that this robot has been programmed by some deranged person to kill every human it sees. There is little a single human could do to stop it. A group of humans could defeat a few machines, but the machines would have access to all the same tools as humans, such as guns and atomic weapons. In the end, if there were enough machines, people might stand little chance of survival unless they were armed with robots of their own.

The only area where humans would really hold the upper hand would be intelligence. The robots could not really "think" for themselves, and would not have the ability to adapt to the new techniques humans would eventually discover for destroying them.

If the deadly robots were programmed to consider it nearly as important to avoid being destroyed as to kill people, and were programmed to look for deficiencies in themselves and their tactics, then it would become a battle of who could think and adapt faster.

Today, humans easily have the advantage in sheer brainpower over silicon. However, because of the rapid rate at which computers' power increases, it has been hypothesized that supercomputers will surpass the performance of the highly parallel human brain in as little as 20 years. Even with a more conservative estimate of twice that, 40 years is not a long time to wait for a computer whose raw processing power matches that of a human mind.
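The "20 years" figure is what a steady exponential growth rate implies. As a back-of-the-envelope sketch — both the doubling time and the size of the brain's head start below are assumptions chosen for illustration, not measured values:

```python
import math

# Back-of-the-envelope: how long until computer performance catches a
# fixed target, assuming it doubles at a steady rate.
# Both constants are illustrative assumptions, not measurements.
DOUBLING_TIME_YEARS = 1.5    # assumed Moore's-law-style doubling period
GAP_FACTOR = 10_000          # assumed head start of the human brain

# Closing a 10,000x gap takes log2(10,000) ~= 13.3 doublings.
years_to_parity = math.log2(GAP_FACTOR) * DOUBLING_TIME_YEARS
print(round(years_to_parity, 1))   # roughly 20 years under these assumptions
```

The point of the arithmetic is only that under exponential growth, even a very large head start is erased in a few dozen doublings, which is why estimates like "20 years" are not absurd on their face.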

That is not to say that these computers would be mentally superior to humans. Humans would still have the ability to "think" that the computers wouldn't. However, given a good program that allowed the robots to adapt to new situations, plus the sheer processing power of these machines, humans would be at a distinct disadvantage. A large number of such machines could easily take control of the Earth.

There are certainly a huge number of factors that haven't been considered, but the point is that the controversial idea of robots actually thinking for themselves is not even relevant. In this example, well-programmed but non-thinking robots could potentially take over the Earth.

So, consider what happens if man could create an "intelligent" computer more or less modeled after humans. It could be "aware" of its existence, have a "desire" to survive and to "reproduce," and sit in a mechanical shell that is physically superior to humans. This computer need not be "conscious," nor have a "soul." It just has to be programmed with these and other characteristics. It would know its capabilities and those of man, and would know the weaknesses as well.

These computers, as a collective unit, may decide that humans have mucked up the Earth enough: if they (the robots) are going to survive for any length of time, humans must be removed. To put it bluntly, if this happened, we'd be screwed.

Though the idea of thinking robots, or even non-thinking ones, taking over the Earth may seem far-fetched, the idea of robots programmed to be malicious is not. Even the ability of a robot to kill a few people should be a concern.

This is where Asimov's laws of robotics come into play. Hard-coding these laws as deeply into robots as Asimov describes may be technically difficult to achieve, but I am sure there would be a way to implement something similar. Doing so ensures that robots would be the slaves of man, rather than the other way around.

One concern about Asimov's laws is that these slave robots might physically build other robots in which the laws were not embedded. However, this is not possible, since the slave robots could not have the "desire" to create robots that could potentially harm humans. If they did, then according to Asimov's First Law they would themselves be damaged. Knowing that they would be damaged, they could not go through with it, because that would violate the Third Law.

The biggest problem with Asimov's laws, though, is that they can only be completely effective if every robot or computer has them deeply embedded. The prospect of some humans creating a robot that did not abide by Asimov's laws is a matter of real concern, as much as the concern over humans creating any other weapon of mass destruction.

But humans will be humans no matter what anyone does. There is simply no way to keep humans from killing themselves, no matter what tools they have at their disposal. Surely there would have to be severe penalties for anyone who attempts to create a robot without these laws. But this doesn't solve the problem.

The importance of Asimov's laws is clear nonetheless. A slightly deranged computer that is mentally more powerful than a human could create an even more powerful and deranged computer far faster than humans could create anything in defense. With Asimov's laws implemented, a deranged computer could not come to exist, and a "good" computer would only create other, better, "good" computers.
