The Turing test and the Chinese Room
I saw that Alan Turing was in the news today - UK gov't apologizes to gay codebreaker Alan Turing
Science dope that I am, I only know who Turing is from watching science fiction tv/movies .... stuff about his Turing test, which gives a guideline for assessing whether one is communicating with a machine or a person. Turing proposed his test in an article for Mind in 1950 - Computing Machinery and Intelligence - in which he claimed that an appropriately programmed computer could think. One of the best-known objections to that claim is the Chinese Room, a thought experiment by John Searle.
In the Chinese Room experiment, Searle gives two scenarios. In the first, hidden in a room is a computer that has been programmed to accept Chinese characters as input and to respond with whatever Chinese characters its programming deems appropriate. It can do this well enough to fool the person outside the room - who is submitting the characters and reading the responses - into believing that he's conversing with a person who understands Chinese. This computer would pass the Turing test. Searle then asks us to imagine a second scenario in which not a computer but a man sits hidden in the room, receiving the queries in Chinese and, by following the same programmed instructions, producing appropriate Chinese responses - the man would also pass the Turing test.
Neither the man nor the computer, however, understands Chinese; each is only able to simulate understanding by following the program. Searle argues that without understanding, neither the computer nor the man is thinking.
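The rule-following Searle describes can be pictured as a lookup procedure. Here's a minimal sketch in Python (the rulebook and its entries are hypothetical, of course - a real rulebook would be vastly larger - but the point is only that the mapping involves no comprehension):

```python
# A toy "Chinese Room": input symbols are mapped to output symbols
# purely by rule, with no understanding involved anywhere.

RULEBOOK = {
    "你好": "你好！",            # "Hello" -> "Hello!"
    "你会说中文吗": "会一点。",   # "Do you speak Chinese?" -> "A little."
}

def room_reply(symbols: str) -> str:
    """Look up the response the rulebook prescribes for these symbols,
    falling back to a stock reply for anything unrecognized."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room_reply("你好"))
```

Whether the room's operator is a computer or a man stepping through the rules by hand makes no difference to the output - which is exactly Searle's point.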
According to what little I've read, many think that the Chinese Room thought experiment doesn't disprove the Turing test. I wish I could explain why not, but to be honest, though I find this stuff interesting it's pretty much all Greek to me. I like better the Voight-Kampff test in Blade Runner for assessing whether or not someone's a machine -- You're in a desert walking along in the sand when all of the sudden you look down and you see a tortoise. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun beating its legs trying to turn itself over but it can't, not without your help, but you're not helping .... Why is that? ... :)
But I digress - one of the places I heard about the Turing test and the Chinese Room was in an episode of the tv series Numb3rs, where Charlie mentions them. The Math Behind Numb3rs blog run by Stephen Wolfram (of Wolfram Alpha fame) has a post that has some interesting comments on the subject. I've pasted the comments under the video clip of the episode.
****
From The Math Behind Numb3rs ....
[...] The Turing test is one of the simplest and perhaps best-known proposals for determining a computer's capability to display intelligence. It was proposed by the father of artificial intelligence, Alan Turing, in 1950. In the Turing test, an impartial (human) judge converses with two parties: a human and a computer (or, in Turing's language, a "machine") that has been programmed to attempt to appear human. If the judge is not able to determine which party is human and which is the computer, then the computer is said to pass the Turing test. (Note that it is not actually required that the computer mimic human speech, only that its responses be indistinguishable from those a human might make. For this reason, the communication is commonly restricted to take place via teletype, instant messaging, etc.). There are of course a number of additional specifications needed to account for the fact that the output of a sophisticated computer algorithm might be comparable to the writing of a young child (or even a non-native speaker of English). It is the latter case that is somewhat similar to the Chinese Room argument mentioned in this scene.
Turing predicted that computers would be able to pass his test by the year 2000. This prediction has proven somewhat optimistic since, as of 2007, no computing device has yet been up to the challenge. In fact, there is an annual competition known as the Loebner Prize devoted to recognizing the software and hardware that comes closest to passing the Turing test.
John Searle laid out the Chinese Room argument in his paper "Minds, Brains and Programs," published in 1980. Ever since, this argument has been a recurring discussion point in the debate over whether computers can truly think and understand. Interestingly, the conclusion of Searle's Chinese Room argument is true despite the fact that it is based on a fundamental misunderstanding of the nature of computers and of computer programs. A prominent example of the fact that the Chinese Room argument does not hold is the life experience of Helen Keller, who managed to escape her own Chinese room, as discussed by Rapaport. In fact, an overwhelming majority of researchers believe that the Chinese room argument is utterly wrong (as discussed in Stevan Harnad's articles on the subject) and that the more interesting question to the field of cognitive science is why it is wrong.
There are a number of subtleties in making the Chinese Room argument. In particular, since Chinese is a pictographic (not a phonetic) language, if you don't speak it, you don't know how to write down the characters corresponding to what you just heard. (And even if you speak it fluently, you might still not be able to write it; most Chinese-speaking foreigners can write much less than they can speak.) So the Chinese Room argument requires Chinese characters (i.e., a text-only channel) as input for this reason. (Interestingly, even so, Chinese dictionaries are ordered based on the number of "strokes" in each character, and assessing what constitutes a "stroke" is something even native Chinese speakers do not always get "right.")
****
You can read more about the Turing test at The Stanford Encyclopedia of Philosophy.
3 Comments:
Interesting article, Crystal, but it is my understanding that the Turing test was passed by a computer quite some time ago. The Air Force spent quite a bit of time working on trying to develop artificial intelligence, and if my mind remembers correctly finally developed a program that responded sufficiently like a human that a psychologist could not tell if he was communicating with a computer or a person. Unfortunately the program responded as a paranoid schizophrenic. Still, I would think that would qualify as passing the Turing test.
If I remember right, the Air Force concluded that the development of a program to mimic a sane person was probably impossible because the response of sane people tended to be random and therefore not predictable.
I experienced this to some extent with my ex-wife when she had a schizophrenic experience. I found her responses to be extremely predictable, both verbally and in physical reaction, so I tend to accept the Air Force's conclusion. I am sure that a lot of people are very uncomfortable with the concept that sanity means unpredictability :).
So which random key do I hit now?
Love and Hugs,
Mike L
Hi Mike,
Yes, I thought that there had been some AIs that could pass the Turing test too - maybe the comment was written about an older episode. He should know, though - he made that smart search engine, Wolfram Alpha.
In a way I bet we're pretty predictable too, if enough info is known about us. I've read somewhere that though people have free will and can make unexpected choices, they do tend to be very predictable in behavior - can't remember where I read it, though.
But I'm not looking forward to a computer that can think for itself - having watched a lot of science fiction movies, I know that that scenario always ends badly for us :)