Thinking AI

Well known among conspiracy theorists, a notorious radio program called Coast to Coast AM drifts through the airwaves in the middle of the night. The host, Art Bell, used to discuss topics ranging from aliens to government cover-ups to bizarre Weekly World News stories that were, for some odd reason, assumed to be true.

I used to listen to the show as a comedy, laughing myself to sleep in the wee hours of the night at the ridiculous questions and claims. One night, the guest was a hilariously disgruntled physicist whom Art Bell had been hounding for over an hour with absurd questions. When the subject turned to quantum mechanics, the physicist saw an opportunity to steer the conversation away from time travel by discussing efforts to build a quantum computer. Art Bell asked, in his most foreboding tone, “Is it possible that if such a computer were built, it could take over the world and enslave humanity?” Exasperated, the physicist replied, “Well, I guess that’s theoretically possible, but I’m not really sure how it would happen. It’s just supposed to do math equations. It would be like asking if a giant calculator could take over the world.” This, I think, sets the tone for the question of whether machines could enslave humanity.

If cars were sentient beings, they would have some massive advantages over humanity. With their hard metal shells, their big crushing wheels, and their inability to feel pain, it would be a pretty short battle between a man and his Jeep. There would be no chance of outrunning it and few indoor safe havens that couldn’t be smashed into. But this scenario is not one that we’re concerned about (although Stephen King might be). Why not? Because cars only start when you turn the key in the ignition, and they only move when you press the gas pedal. There is no chance of the car attacking you unless somebody else is behind the wheel. The same can be said of computers.

The primary difference between humans and computers is that computers only do what they’re programmed to do. They have a specific set of commands and responses, and they perform as they’re intended to perform. I would argue that it’s certainly possible for a person to program a robot to take over the world, but that’s a fairly different situation.

Even programs designed to learn can only learn what they’re programmed to learn. If a program is set to “contemplate” what it should do next, it’s not actually thinking; it is choosing an action from a short list according to parameters set in advance. If, for instance, it were programmed never to kill a human, the action “kill” would be blocked whenever the parameter “human” appeared. It couldn’t evaluate the human and decide to change its own parameters because of the human’s illogical nature, unless some psychopathic Spock programmed it to do that.
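
To make this concrete, here is a minimal sketch (all names hypothetical) of the kind of “contemplation” such a program performs: it selects from a fixed list of actions, subject to hard-coded restrictions it has no means of revising on its own.

```python
# Hypothetical rule-based action selection: the program "decides" only in
# the sense that it filters candidate actions through fixed restrictions.
RESTRICTED = {("kill", "human")}  # the parameter "human" blocks the action "kill"

def choose_action(candidate: str, target: str) -> str:
    """Return the candidate action unless a hard-coded restriction forbids it."""
    if (candidate, target) in RESTRICTED:
        return "idle"  # the rule fires; the program has no way to overrule it
    return candidate

print(choose_action("kill", "human"))   # -> "idle"
print(choose_action("greet", "human"))  # -> "greet"
```

Nothing in `choose_action` can rewrite `RESTRICTED`; any change to the rules has to come from a programmer standing outside the program.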

The common counterargument at this point is usually that artificial intelligence will magically poof this problem away, but I suspect that such a thing will never exist. The unique nature of our consciousness is something that developed slowly over millions of years of natural selection; I don’t find it likely that it’s something Lou will be building in his garage any time soon.

The best way to explain this is to examine the Turing test, which is supposed to determine whether a machine can think. Alan Turing proposed that if an AI and a real person both communicated with a test subject, and the subject failed to identify which of the two was real, the machine could be deemed intelligent. The critical mind can clearly see that this concept is completely bogus. Designing a computer to fool people into thinking it’s human isn’t intelligence; it’s a parlor trick.
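
The trick is older than most people realize. Joseph Weizenbaum’s ELIZA fooled some of its users back in 1966 with nothing more than pattern substitution. A toy sketch of the idea (the patterns and canned replies here are invented for illustration):

```python
import re

# ELIZA-style pattern substitution: canned templates, zero comprehension.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # stock deflection when nothing matches

print(respond("I feel trapped by my computer"))
# -> "Why do you feel trapped by my computer?"
```

A test subject who mistakes `respond` for a person has told us something about the subject, not about the program.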

To wrap up, I wanted to touch on John Searle’s famous Chinese Room thought experiment. The basic concept is that if you were to sit in a room with a book that matches each line of Chinese to an appropriate response in Chinese, then no matter how many Chinese speakers you corresponded with, or how convinced they became that you really knew their language, you would still leave the room unable to speak Chinese.
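
The room reduces to a lookup table. A toy version follows (the rulebook entries are made up, and a real one would be unimaginably large):

```python
# The Chinese Room as code: symbol matching with no grasp of what the
# symbols mean. The translations in the comments are invisible to room().
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(message: str) -> str:
    # The person in the room just looks the symbols up.
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(room("你会说中文吗？"))  # -> "当然会。" Convincing, yet nothing here understands Chinese.
```

Swap the dictionary for a bigger one and the answers get more convincing, but the understanding never shows up.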

This is why computers cannot think. An arbitrary list of commands and responses will never give a machine the capacity to understand the motivations behind those actions, or, for that matter, to understand anything at all. The only way robots will ever take over the world is if someone gives me an army of robots to command.