I’m planning philosophy lectures for the coming semester. One of our topics is the nature of being human. We compare people to machines or computer programs and ask whether we are any different in principle. Is there something that humans have that a machine could never be programmed to have? Self-awareness, or free will, or feelings, or a soul, or something?
Last year when we got to this subject, I found it surprisingly hard to generate discussion because almost all of the class thought it was obvious that there is no real difference. Clearly, they felt, someday computers will be programmed to think just like (or better than) humans. They wondered why there was even a question. This is odd to me, because I am pretty sure that 30 years ago most people would have felt the opposite way about it.
So what has changed, that people’s first reaction to this question is so different from what it used to be? At first I assumed it probably had something to do with all the impressive things computers do these days, and how much they are a part of our lives. It’s easier to believe a computer can think when you can google everything.
On reflection, though, I don’t think computer science achievements have much to do with it. I think what changed things is that Spock was replaced by Data: for four decades we’ve had a steady stream of movies and books and games in which artificial intelligence is a given. HAL, Deckard, the Terminator, the kid in AI … the list is very long. The self-aware computer or robot has become such a familiar literary trope that its plausibility isn’t even questioned anymore.
If so, it’s fascinating (and a little unnerving) how much difference mere storytelling makes to people’s most basic intuitions.
Obligatory xkcd comic: http://xkcd.com/375/