
Turing Test

February 25, 2012

“Computing Machinery and Intelligence” by Alan Turing (1950), as published in Haugeland’s “Mind Design II”.

Link!

http://www.loebner.net/Prizef/TuringArticle.html

Bio: Pretty Impressive and Sad…

http://www.turing.org.uk/bio/part1.html

This may be a long, boring one. If you don’t want to read the post below, at least read the biography above. He’s considered the father of modern computing, and his cracking of the Nazi Enigma code for the British in WWII effectively won the war, or at least secured the country. He was arrested in ’52 for homosexuality and took his own life two years later. Maybe I’ll try to study up on him a little later on and write something of my own about the guy.

-Alan Turing describes the Turing Test as a test for intelligence and for thought. Turing’s idea of machine intelligence is simply the perfect mimicry and imitation of humans: the machine must give answers a judge would deem appropriate for a human, and in a similar amount of time. If the machine can behave like a human, it must have some intelligence.
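To make the setup concrete, here is a minimal sketch in Python of the imitation game as I understand it from the paper. The judge, the canned replies, and the labels below are my own toy stand-ins for illustration, not anything Turing actually specifies.

```python
import random

# A toy sketch of the imitation game: a judge exchanges typed questions with
# two hidden parties, one human and one machine, and must say which is which.

def run_imitation_game(judge, human_reply, machine_reply, questions):
    """The judge interviews both hidden parties, then guesses which is the machine."""
    # Hide the parties behind anonymous labels so the judge only ever sees text.
    parties = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        parties = {"A": machine_reply, "B": human_reply}

    transcripts = {label: [] for label in parties}
    for question in questions:
        for label, reply in parties.items():
            transcripts[label].append((question, reply(question)))

    guess = judge(transcripts)  # the label the judge thinks is the machine
    return guess, parties[guess] is machine_reply


if __name__ == "__main__":
    # Hypothetical stand-ins: canned answers and a judge who guesses at random.
    human = lambda q: "Hmm, let me think about that for a second..."
    machine = lambda q: "I can assure you that I am a human being."
    judge = lambda transcripts: random.choice(sorted(transcripts))
    print(run_imitation_game(judge, human, machine, ["Please write me a sonnet."]))
```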

My definition of intelligence is the propensity to learn new facts, opinions, views, etc., while thought is how we use that knowledge to synthesize new ideas and give more meaning to those we already have. Thought involves the combination of ideas into new ideas and new ways of understanding. So intelligence is a necessary condition of thought; thought is what is built from our intelligence.

If the imitation game is merely to imitate human conversation, then it is not a practical means of assessing machine intelligence under my definition of intelligence. Turing’s definition is exceedingly narrow; under his definition the game would seem to be a viable means of proving machine intelligence. If my definition were held, the game might measure some degree of conversational intelligence, but it would not be an adequate test of a machine’s true overall intelligence.

An intelligent machine might believe that were it to pass the Turing Test, people would take it apart to see how it worked. So it might intentionally fail. If a machine were to believe that, it would be fearing the cessation of its own being. Possessing knowledge of its own existence should be considered having a personality, and therefore the machine would be thinking. This reaction by the machine, though undoubtedly programmed, would be totally separate from all other human interaction and influence. Such thoughts, if we were able to know them, would be undeniable evidence of an intelligent machine.

Five of the ten judges in the First Turing Test thought that a version of Weizenbaum’s program was human. So naive humans might be said to be too gullible for Turing Test purposes. Suppose the government decided to make Weizenbaum’s ELIZA program vastly larger by adding more and more canned responses and developing hardware to get the machine to deliver the canned responses quickly. The resulting SUPERELIZA program, still a bag of tricks (that is, responses whose every detail was thought of by the programmers), might be thought to be intelligent even by judges who are wise to the ELIZA tricks.

The machine would still not be intelligent. While it could respond to any question posed to it, it would not be able to “think” as I defined it above. Processing a question and searching for an appropriate canned answer makes for a repetitive machine with no use beyond answering questions. Not that that’s completely useless: Siri for the iPhone has many uses, including answering questions, giving directions, etc.
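To show what I mean by a “bag of tricks,” here is a minimal sketch of an ELIZA-style canned-response program. The patterns and replies are hypothetical illustrations of the idea, not Weizenbaum’s actual script; making a SUPERELIZA out of it would just mean adding more rules to the lists.

```python
import random
import re

# Every response below was written in advance by the programmer; the program
# only pattern-matches and substitutes, it never forms new ideas.
CANNED_RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bbecause\b", re.IGNORECASE),
     ["Is that the real reason?", "What other reasons come to mind?"]),
]

# Fallbacks used when no pattern matches -- more canned text, not thought.
FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]


def respond(user_input: str) -> str:
    """Return a canned response chosen purely by pattern matching."""
    for pattern, templates in CANNED_RULES:
        match = pattern.search(user_input)
        if match:
            captured = match.groups()[0] if match.groups() else ""
            return random.choice(templates).format(captured)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I need a vacation"))    # e.g. "Why do you need a vacation?"
    print(respond("I am tired of work"))   # e.g. "How long have you been tired of work?"
    print(respond("The fire went out"))    # falls through to a stock reply
```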

An intelligent cave-person might be very good at telling men from women in the imitation game, but nonetheless hopeless at telling people from machines because of a lack of familiarity with technology. With such a judge, unintelligent machines, even iPhones, may consistently pass the Turing Test. Further, it will do no good to specify that the judge be selected randomly, for in a cave-society (where everyone is unfamiliar with technology) unintelligent machines may consistently pass, and thus will be intelligent, relative to that society, according to the Turing Test conception of intelligence. Of course, an unintelligent machine such as an iPhone will be incapable of the genuine thinking that the cave people manage easily, e.g., figuring out where to find food, understanding why the fire went out, and the like. So the machine won’t be genuinely intelligent, even by the standards of that society.

-The Turing Test seems to act as a sort of poll of the population to determine intelligence, when there should be a consensually agreed-upon scientific definition of it. If intelligence depends on the society, or on the time for that matter, then we have failed to provide a true definition and test of intelligence. Something must be said about encyclopedic knowledge, however. Humans do not individually possess encyclopedic knowledge, and I’m not sure computers should be expected to possess the sort of knowledge that would be considered uniquely human. The computer should only know what is relevant at the time. If I were transported to the caveman days, I would surely be considered stupid due to my lack of practical hunting and gathering knowledge and be dead within a week. Conversely, were I too cold or quick with my answers, I might run the risk of being judged a machine.

The last two objections depend on the possibility that the judge may lack the abilities necessary to discriminate intelligent machines from unintelligent ones. Is there some way of specifying the nature of the judge so as to avoid such problems?

The depth of human intelligence cannot be truly simulated by the Turing Test or any other behavioral test. Conversational intelligence, the ability to communicate effectively with a human, seems to be the only skill exercised and proven by the Turing Test. If another behavioral test were proposed that limited itself to a single human characteristic, instead of the full breadth of human complexity, it would be too narrow to judge machine intelligence relative to a human. A true test, if behavioral, would seem to need (you may want to sit down) a replicant of some sort. Yes, like Roy or Rachael in Blade Runner. For a machine to fool humans across the board, to empathize with a human being and perform the functions of a human in an entirely convincing manner, it would have to share the experiences of a human. Just as we can’t know what it is to be a bat, a machine cannot know what it is to be a human, no matter how complex and thorough the programming. When the time comes, the judge must be the general public. If the machine can assimilate itself into the population, it could almost certainly be said to replicate a human. The other option would be the 2001: A Space Odyssey one. I will have to think about the HAL option and get back to you guys…
