It’s a staple of science fiction. 2001: A Space Odyssey; The Terminator; The Matrix; I, Robot. The plot: Humans create machines with artificial intelligence (AI). The machines become conscious. The machines turn on their human creators and kill or enslave them.
Popular movies and novels commonly reflect the hopes and fears of present-day society, even if they’re set in the distant past or future. And the fear of AI taking over the world is a very real one for some very smart people. Famed theoretical physicist Stephen Hawking issued an ominous warning that “The development of full artificial intelligence could spell the end of the human race.” Technology entrepreneur Elon Musk joined the chorus of fear, saying, “Mark my words: A.I. is far more dangerous than nukes.”
Others disagree. Computer scientist Michael Littman wrote an op-ed piece arguing that “the ‘rise of machines’ is not a likely future.” Computer science professor Subhash Kak agrees in his recent article, “Why a computer will never be truly conscious.” Neuroscientist Anthony Zador and computer scientist Yann LeCun argue that since AI did not need to evolve in a competitive environment as humans did, it never developed the survival instinct that leads to a desire to dominate others (see “Don’t Fear the Terminator”). Besides, LeCun argues elsewhere, “One would have to be unbelievably stupid to build open-ended objectives in a super-intelligent (and super-powerful) machine without some safeguard terms in the objective.”
And so the debate continues.
Personally, I’m with the optimists. Yes, I enjoy an exciting apocalyptic sci-fi flick of the humans-vs.-robots variety. But in the real world, I don’t think machines will ever develop consciousness and enslave or exterminate humanity. Aside from the inherent scientific limitations of electromechanical devices and the supreme stupidity of designing machines without safeguards, robots do not have a soul—and I don’t believe they ever will.