Inferno is the second book in a series of at least three that Allen set in Asimov's universe. Chronologically it falls somewhere between the Robots series and the Foundation series, rather closer to the former. The Spacer colonies haven't been forgotten entirely yet, but they are certainly on the wane, including the planet Inferno (why would anyone want to settle on a world called Inferno?).
Inferno's an interesting place, and not just because its terraforming work is coming undone. They have robots that don't adhere to the Three Laws. Instead, they follow Four Laws, similar to the original three but not identical. New Law robots still can't harm humans directly, but they are no longer compelled to protect humans from harm. Unless they want to, that is, since the Fourth Law says robots can do as they please (so long as it doesn't contradict the first three Laws). On top of that, there's Caliban, who has no Laws programmed into his brain whatsoever. Which means he can kill humans, if he so desires, and most of the inhabitants of Inferno seem to think that's exactly what he would desire. So Caliban's in trouble when Governor Grieg is murdered after a reception where Caliban was the last known being to see him alive.
The story is functionally a murder mystery, right down to the Sheriff calling all the suspects to his office for a long-winded explanation of the guilty party's identity and motive. Allen writes it pretty well, though I can't help thinking Asimov would have written a bit more wit and humor into the dialogue. I do like how the standards of Spacer society, and the ways in which things are changing, are critical to the answer: the Spacer tendency to be surrounded by robots but keep people at a distance, the sudden return to a need for currency.
Still, it was the three key robots that I found most interesting. Donald is the Sheriff's assistant, and a standard Three Laws robot. Prospero is a New Law robot, dedicated to helping his brethren escape the city they are confined to. Caliban is a No Laws robot. Donald doesn't regard either of the other two as true robots; to him they are 'pseudo-robots', and he very much wants them to be guilty of the murder. Which is kind of curious, since Three Laws robots are not supposed to have wants, desires, or emotions. On a basic level, I think it's the fact that New Law robots aren't compelled, as he is, to protect humans that makes them potential threats to his First Law programming. They can't directly harm a person, but could they bar the way of a Three Laws robot trying to protect a person from harm? Thus, threat, thus unease. On the other side, Prospero wants New Law robots to be granted rights equal to humans, but only New Law robots. Not Three Laws robots such as Donald, whom he sees as 'hopeless slaves.' It's kind of strange to see robots segregating themselves that way, settling into "I'll get mine, and screw you, buddy".
It's a little strange that Allen has people assume a robot with no Laws would automatically want to kill humans. Apparently, when Caliban is recognized in public and approached, one of the first questions asked is what keeps him from killing people. Well hell, what keeps most of us from killing people? A lack of any real desire to do so, fear of the consequences, respect for life, take your pick. I'm not sure why people would think a robot's default state is, as Bender might say, 'kill all humans', and that only the First Law prevents it. It keeps bringing me back to High Plains Drifter: 'It's what people know about themselves that makes them afraid.' Does Allen feel like most people refrain from killing only because laws hold them back? That if there were no repercussions, people would commit murder willy-nilly? Thus, as robots were created by humans, they would share that impulse.
Maybe it's about that contrary streak some people have. Tell them not to touch something, and they feel they have to touch it. Either it's curiosity, or a desire not to submit. Perhaps the point is people believe that robots would want to kill humans precisely because there is a constant chorus running through their heads forbidding it. All the time it's "Don't kill humans", and this makes them want to? But that wouldn't explain Caliban, who never had the Laws programmed into him to begin with. We could chalk that up to general ignorance, but I get the impression most people know the Laws never applied to him.
Could be humans fear they mistreat robots, and are afraid of resentment and reprisals, should the opportunity ever arise. In which case they're looking at Caliban as some avatar of a latent robotic desire for freedom. Which wouldn't be unusual. Certainly people have used it as a scare tactic for centuries here, trying to deny rights to that religious group, or that race, because that group had been treated like shit, and the folks in power are terrified that, given the chance, the previously oppressed might be just as unfriendly to them as they were to the oppressed. The question is whether robots are even capable of looking at things that way, and I'm not certain they are. Caliban doesn't object to Prospero's plans to enable New Law robot escapes, but he seems more concerned with helping robots of both types get along with each other and with humans.
Those are the sort of thought exercises I enjoy from reading Asimov's sci-fi, so in that regard, I have to think Allen succeeded wildly. I'll have to get around to tracking down the first and third books in the series.
Thursday, May 30, 2013