You can prove anything you want by coldly logical reason – if you pick the proper postulates
“…we’ve got to do something.” Donovan was half in tears. “He doesn’t believe us, or the books, or his eyes.”
“No,” said Powell bitterly, “he’s a reasoning robot – damn it. He believes only reason, and there’s one trouble with that–” His voice trailed away.
“What’s that?” prompted Donovan.
“You can prove anything you want by coldly logical reason – if you pick the proper postulates. We have ours and Cutie has his.”
“Then let’s get at those postulates in a hurry. The storm’s due tomorrow.”
Powell sighed wearily. “That’s where everything falls down. Postulates are based on assumption and adhered to by faith. Nothing in the Universe can shake them.”
The Laws According to Isaac Asimov
Isaac Asimov was the prolific Russian-born American writer who gave us the Three Laws of Robotics:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
from I, Robot
Smart laws, right? Except with any set of absolutes there always seem to be wrinkles in problematic places. When it all has to work as expected, one unanticipated variable can trounce the dominant paradigm. Who can anticipate every parameter, especially once the thing is up and running with a mind of its own? It’s complicated.
Any good nerd knows that working on the fly has special risks: working in a live environment is not like error-checking within the controlled safety of a developer’s sandbox. The unanticipated happens in public, seen by everyone, impacting and possibly influenced by conditions in the wild.
In today’s excerpt, Cutie (QT1) has sidestepped all three Laws by not believing Powell and Donovan are significant enough to be authority figures. This is trouble, because QT1 is the robot in charge of controlling the space station’s power system. Even worse, the station’s other robots agree with QT1’s logic and will obey only him.
Have you ever been curious about what it would be like to have the benefit of logic without bias or history? Sounds interesting, maybe even freeing. Maybe.
Doubts introduced by our personal complications can influence us to question our assumptions, pushing us into useful innovations.
Isn’t that how it worked for QT1?