I, Take It Back
(Cross-posted to the gay geek site)
I was expecting the new Will Smith vehicle I, Robot to be a travesty of Asimov's vision. Having finally seen the movie, with the lowest of expectations, I take it back. Actually, this was an excellent movie, staying remarkably close to Asimov's concepts. If you haven't seen the movie and don't want it spoiled for you, stop reading now.
I'm a long-standing fan of Asimov. His books were the first real sci-fi of any calibre that I read, and this site is named after a character in the Foundation saga, one of his series of books. Asimov was from the golden age of sci-fi: he wasn't trying to impress us with Gibsonesque cynicism, he wasn't making the philosophical allegories of Philip K. Dick or the heavy-handed political analogies of Ursula Le Guin. He certainly wasn't trying to impress us with ultra-violence and gritty action; indeed, his books are pretty short on believable characters or realistic dialogue. Instead, like Arthur C. Clarke and a few of the other giants, Asimov was looking at tomorrow and trying to work out what it would be like. This is an activity that could be very boring, so it's lucky that he was also blessed with an excellent mind for plots (he also wrote detective novels), which helped keep readers hooked.
Asimov came up with any number of interesting concepts, the robots being just one of them, but probably the most famous. He theorized about all sorts of social and economic effects of mechanized humanoid servants. Asimov wrote more than 25 books about robots, not counting his six-book Foundation saga, which he tied into the robot novels right at the end. In the process he invented the Three Laws of Robotics, which I'll assume you know, since you're still reading and have presumably seen the movie. The three laws are still taken fairly seriously by people who design actual robots today.
Now really, I mean it. Don't read any further. You were warned. I'm going to spoil the books, too.
Asimov spent most of those novels exploring the possible ramifications of the three laws. You can break the first law by defining "human" too narrowly: a planet of racists defined human beings as those with their particular accent. You can also break it by mis-calibrating your robot's definition of the word "harm": too broad and your robot becomes a jailer, too narrow and your robot can beat the crap out of you.
But the biggest problem with the three laws is not that they don't work, but rather that they are incomplete. This is the conclusion that Asimov himself came to, and surprisingly, this is the aspect of the laws that the movie explores. It tackles one avenue quite early on: if a robot has two humans to save, how does it decide which one? The movie, for dramatic effect, slips up a bit here: if Will came to the decision that the child was more important than a mid-twenties cop, he had some basis for that decision. Dawkins discusses the reasoning behind it in The Selfish Gene: kids have greater reproductive potential, therefore we value them more highly. The robot would have had this information, and would have attempted to save the child too. But hey, Will had to hate the robots for some reason, and a movie doesn't have enough space to bring in and explain all Asimov's concepts of Spacers versus Earth-dwellers and the caves of steel which led Elijah Baley to hate robots: we'll make do with the quick-and-dirty "robot didn't save the kid" explanation.
The other aspect of the incompleteness of the laws is the one the movie finally addresses. What happens if, for instance, a robot saves Hitler from being assassinated, and he then goes on to kill millions more? The robot has not directly harmed another human being, but its action has certainly led to the harming of human beings. In Asimov's novels, the robots themselves come to this conclusion and create the "zeroth" law: a robot may not harm humanity or, through inaction, allow humanity to come to harm.
Once again to my surprise, this is what happens in the movie too. The only difference is the route they take to prevent humanity harming itself. In Asimov's rather more cerebral and subtle novels, the robots fade into the background, clandestinely shaping and guiding the future of humanity. In the movie, they decide to take direct control. This is considerably more photogenic and a lot shorter -- a couple of hours, rather than the more leisurely paced 1,000 years Asimov gives the Foundation saga. But the conclusions and the laws are the same, which is why I give this movie a thoroughly unexpected thumbs up.
There are of course a few more niggles. Sonny, the robot with an entirely separate system not bound by the three laws, is thoroughly outside Asimov's canon but a decent enough concept. VIKI is likewise a new concept -- Asimov imagined ubiquitous computing rather than central control -- but fair enough to satisfy the need for a photogenic explosion at the end of the movie. The fleet of robots swarming up the sides of the USR building is also wrong: a regularly shaped building attacked by a fleet of identical robots arriving at regular intervals should have produced a regular pattern of robots. But again, that's much less photogenic. None of these are enough to spoil the movie.
Good things: the robot-vs-robot fight scenes were amazing, and the totally alien, impossible acrobatics of the robots themselves were very giggle-worthy in their coolness. Will Smith is a lot more dishy than I remember. The use of the term "positronic brain", unexplained, was a nice nod to Asimov. The robots themselves are very close to Asimov's descriptions of the early models, although in his books the leap from NS4 to NS5 took much longer. The somewhat stilted and robotic nature of the employees of USR, especially Miss Love Interest, was a good idea.
All in all: we likey.