Review: I, Robot

Published Jul 15, 2016 (8 years ago)

While cleaning out my garage last month, I discovered a copy of I, Robot by Isaac Asimov in a box. You've probably heard of the "three laws of robotics" before, through the movie or elsewhere in science fiction. After having such a great time reading the Foundation Series, I figured I would give this one a try, and I was even more impressed than before.


In case you haven't heard them before, the three laws of robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

At first glance, these laws seem pretty straightforward and rigid. However, we discover that these laws, ingrained into every robot created, do not always work as smoothly as they sound. These "not so smooth" scenarios are what play out in the series of short stories that make up the book. The stories are linked together through interviews with Susan Calvin, a robopsychologist who has been with robots since their earliest days. To some extent, the short stories trace the evolution of robotics as robots become more and more ingrained in society and even come to run parts of it.

What fascinates me is that the laws aren't necessarily discrete rules in a robot's programming, but more like weights that influence the decisions it will make. It is these weights, and each individual robot's experiences, that begin to take shape in each chapter. Much of the book unfolds almost like a mini mystery for me. Can I figure out what is going wrong before the people involved do, and before it is too late?

Beyond the "mysteries" of each chapter, some deeper moral and philosophical questions start to come into play. If a child loves a thing so much, and it cares for her in return, does it start to take on some form of "personhood" of its own? How can you "disprove" that a person is a robot when a genuinely good person exhibits the same kind of behavior a robot would? Can you trust something that you made, imperfect as it is, that has the potential to surpass (and perhaps already has surpassed) your own capabilities and reasoning?

All of these nuggets and more are packed into a book of roughly 300 pages. It was a quick and fascinating read, and I just might have to go through it again to see if I missed anything. I give it a solid 5 out of 5. Go grab a copy and give it a read!