WHEN CARS THINK TOO MUCH
The humor has never been lost on me. Whenever I hear the term “artificial intelligence” (AI), I am instantly reminded that intrinsic to the term is the word “artificial.” The obvious sticking point of AI is the seemingly insurmountable challenge of coaxing something artificial into behaving as if it were natural. Can AI ever be natural? This is exactly the challenge that Google’s self-driving cars are facing.
Self-driving cars are programmed to drive safely and by the book. The problem is that few human drivers do so 100% of the time. This creates an interesting and even dangerous disconnect between self-driving cars and human-driven cars (The New York Times in The Kansas City Star, “Driverless Crash? Not Google’s Fault,” September 3, 2015, pp. A8–A9):
“Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.
‘The real problem is that the car is too safe,’ said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles. ‘They have to learn to be aggressive in the right amount, and the right amount depends on the culture.’” (p. A8)
This can create situations that are downright funny. For example, a self-driving car might sit indefinitely at an intersection because it perceives one or more of the incoming vehicles as moving too fast or failing to come to a complete stop, thereby creating an unsafe situation into which it refuses to drive.
I used to make fun of AI by referring to it as “so-called AI.” From now on, I will make fun of it by simply saying, “It’s artificial! What do you expect?”

