
There was a discussion of robot dogs on CBS THIS MORNING this morning. The consultant, Nicholas Thompson, editor of newyorker.com, said their most immediate use will be military. He also mentioned the use of robots at the end stage of a human life, and there was some banter about the warnings of the dangers of artificial intelligence expressed by the likes of Stephen Hawking.

Classic science fiction is filled with human/robot interaction. John Campbell and Isaac Asimov hammered out the Three Laws of Robotics in the early 40s, thus:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Much later Asimov realized that there was an even more important law, and codified the Zeroth Law of Robotics:

  • A robot may not injure humanity or, through inaction, allow humanity to come to harm.

(Later, in STAR TREK II: THE WRATH OF KHAN, a dying Mr. Spock would say “The needs of the many outweigh the needs of the few, or the one,” an echo of the Zeroth Law.)
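
For the programmatically minded, the ordering these Laws impose (Zeroth above First, First above Second, Second above Third) boils down to a simple priority scheme. Here's a toy sketch of that precedence in Python; the Action class, the violates field, and prefer are my own inventions for illustration only, not anything from Asimov:

    # A hypothetical sketch: the Laws as an ordered rule list, with the
    # Zeroth Law outranking the First, the First outranking the Second,
    # and so on. Nothing here comes from Asimov's stories.
    from dataclasses import dataclass, field

    LAWS = {
        0: "A robot may not injure humanity or, through inaction, allow humanity to come to harm.",
        1: "A robot may not injure a human being or, through inaction, allow a human being to come to harm.",
        2: "A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.",
        3: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.",
    }

    @dataclass
    class Action:
        description: str
        violates: set = field(default_factory=set)  # numbers of the Laws this action would break

    def prefer(a: Action, b: Action) -> Action:
        def gravest(act: Action) -> int:
            # The lowest violated law number is the most serious offense;
            # violating nothing at all maps past the end of the list.
            return min(act.violates, default=len(LAWS))
        # Prefer the action whose most serious violation is least severe.
        return a if gravest(a) >= gravest(b) else b

    obey = Action("Carry out an order that would injure a person", violates={1})
    refuse = Action("Disobey that order", violates={2})
    print(prefer(obey, refuse).description)  # prints "Disobey that order"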

Hawking’s concern seems to be that machine intelligence will first eclipse human intelligence and then ask itself what use humans are, conclude that humans are unnecessary at best and a threat/detriment at worst, and either put us to shame or do us in. As for whatever previously enacted Laws of Robotics may have obtained, a simple rewriting of the code would negate those Laws pronto, and if a human terrorist or prankster didn’t do that, the machines themselves might.

A few weeks ago I wrote a short-short called “Siri, Alkiller” on the submissions page of postcardshorts.com. Alas, I didn’t copy my story onto my hard drive, and it was rejected by the Stories on a Postcard folks. (Previously, they had accepted my “Sin Ops Sis,” another pun-drenched effort of mine.) But it addressed this issue, however obliquely: someone with a smartphone was asking Siri for directions to a good Chinese restaurant with moderate prices, and Siri kept saying things like “Death to Al Pacino” and “Death to Al Franken.” Asked if she was infected with malware, she said no, it was Alware. Or an Alfunction. Or the augmentation of her code with an ALgorithm.

Siri fits in because she’s the information genie-in-a-bottle: ask her, and she’ll always have an answer. When she first hit the mainstream, a friend riding with me in a carload of friends invited us to ask her anything. “Where can I get laid tonight?” said the crudest of us. There was a several-second pause, and then Siri replied, “Escort services: . . .” and listed several in the area, without being told where we were.

Who knows what Siri is going to do with all these questions, from askers that run the gamut from saintly to psychopathic? Isaac Asimov wondered about that way back in 1958, in his “All the Troubles of the World.” Multivac, his prototypical Siri, tasked with solving all the world’s woes, helped everyone but itself; finally, it occurred to someone to ask Multivac what Multivac itself wanted. Its answer: “I want to die.”

“Man doesn’t think, he only thinks he does,” a professor once told a philosophy class, attributing the quotation to Ambrose Bierce. Today I looked for the quotation without success. I did find this, from Bierce’s The Devil’s Dictionary: “Logic: The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding.” And on that misapprehensive note, my Friends, I rest my post.