A regular pastime of mine is making fun of artificial intelligence. AI can be a wonderful thing as far as it goes and as far as we allow it to go. AI is powerful because, under the right circumstances, it can apparently replicate human thought quickly and easily. At the same time, AI has intrinsic weaknesses owing to its obvious lack of human qualities such as empathy, consciousness, judgment, free will, and holistic thinking.
Alan Turing is considered the father of modern computer science. Although he died in 1954, his “Turing test” continues to fuel computer scientists’ quest to create a machine so capable that you or I would find it indistinguishable from a human being. The test is said to be passed if, after exchanging text messages with the machine, we are convinced we have been communicating with another person rather than a program. Hence the ongoing effort to write code that will pass the Turing test.
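The setup described above can be sketched in a few lines of code. This is a toy illustration only, not a real chatbot: the `machine_respondent` function and its canned replies are hypothetical, and a real test would of course hide whether the respondent is human or machine from the judge.

```python
# A minimal sketch of the Turing test protocol: a judge sends text
# messages to a hidden respondent, collects the replies, and would
# then guess whether the respondent is a person or a program.
# The respondent here is a deliberately crude stand-in.

def machine_respondent(message: str) -> str:
    """A toy stand-in for the program under test (hypothetical)."""
    canned = {
        "hello": "Hi there! How are you today?",
        "are you human?": "Of course. Why do you ask?",
    }
    return canned.get(message.lower(), "That's an interesting question.")

def run_turing_test(questions, respondent):
    """The judge poses each question and records the exchange;
    the verdict would rest on the whole transcript."""
    return [(q, respondent(q)) for q in questions]

transcript = run_turing_test(["Hello", "Are you human?"], machine_respondent)
for question, reply in transcript:
    print(f"Judge: {question}")
    print(f"Respondent: {reply}")
```

Even this crude respondent shows why the test is contentious: a transcript can look conversational without any understanding behind it.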
Lately, some within the computing community have proposed that the Turing test has outlived its usefulness. This position stems from the growing view that the test rewards deception more than it measures genuine intelligence or thought. Gary Marcus, director of Uber AI Labs and a professor of psychology and neural science at New York University, recently wrote about his team’s work on this fascinating question.
Two points from Marcus’ team especially capture my attention. One involves the debunking of the very idea that just one test (the Turing test) is genuinely capable of assessing AI (Marcus, Gary. “Am I Human?” Scientific American. March 2017, pp. 58–63):
“Initially we focused on finding a single test that could replace Turing’s. But we quickly turned to the idea of multiple tests because just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence.” (p. 63)
The other point is the intrinsic and pervasive superiority of your brain or mine over any AI system, which again underscores the idea that human-level AI may be an impossible goal:
“Anyone who has ever tried to program a machine to understand language has quickly realized that virtually every sentence is ambiguous, often in multiple ways. Our brain is so good at comprehending language that we do not usually notice.”
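Marcus’s point about ambiguity can be made concrete with a little arithmetic. A standard result in natural-language parsing is that the number of ways to attach a chain of trailing prepositional phrases (“put the block in the box on the table by the door…”) grows as the Catalan numbers; the sketch below, my own illustration rather than anything from the article, just computes that count.

```python
# How fast syntactic ambiguity piles up: the number of distinct parses
# for a sentence with n stacked prepositional-phrase attachments grows
# as the Catalan numbers -- a commonly cited combinatorial result in
# natural-language parsing.

from math import comb

def catalan(n: int) -> int:
    """n-th Catalan number: C(n) = C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

for n in range(1, 7):
    print(f"{n} prepositional phrases -> {catalan(n)} possible parses")
```

Our brains resolve this explosion of readings so effortlessly that, as Marcus notes, we rarely notice it is there at all.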
Well, back to one of my favorite pastimes. The next time you find yourself wishing that you were smarter than a computer, please stop. The truer “wish” would be that a computer were smarter than you. And as we all know, not all wishes come true.