Posts Tagged ‘alan turing’


Searle’s Chinese Room Argument

February 25, 2012

See... philosophers smile too.


“The heart of the argument is an imagined human simulation of a computer, similar to Turing’s Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese symbols, where a computer “follows” a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by following a program, comes to genuinely understand Chinese.” (from Stanford Encyclopedia of Philosophy)

Searle does not argue that a machine could not think. According to Searle, “Strong AI” is the claim that a machine, given the right formal program, could BE a mind with understanding and other cognitive states, and that producing such intelligence would explain the processes of the human mind. Searle holds that intentionality cannot be reproduced by any formal program, and therefore Strong AI, so defined, is false. Intentionality [aboutness] and understanding are linked: to understand anything, the symbols must stand for something. If a machine does not realize what a symbol means, it can understand neither the input, nor the action it performs on the input, nor the output. Biological and mechanical machines can think, but only minds can understand. Though not specific, Searle does point to physical-chemical processes for an explanation of the mind. Only machines built like human brains, to function like them, can have the same understanding as the human mind.
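The “symbol manipulation on the basis of syntax alone” that the argument targets can be sketched in a few lines. This is a toy illustration of my own, not anything from Searle; the rulebook entries are invented, and the point is just that the mapping from input to output never consults meaning:

```python
# A toy "Chinese Room": input symbol strings are mapped to output symbol
# strings by lookup alone. The person in the room matches shapes, never
# meanings -- exactly the purely syntactic process Searle describes.
RULEBOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你叫什么": "我叫小明",  # "What is your name?" -> "My name is Xiaoming"
}

def room(symbols):
    # Apt-looking replies come out, but no understanding happens here.
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(room("你好吗"))  # produces a sensible Chinese reply by lookup alone
```

However large the rulebook grows, the lookup stays syntactic, which is Searle’s point: appearing to converse is not the same as understanding.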

Searle considers six replies to his argument. In this short section I am going to explain which of them I find most powerful and why. One of Searle’s complaints was that no formal program could replicate the intentionality necessary for understanding. I agree that an immovable, perceptionless machine will never be able to fully understand the world; the perceptive senses are what is necessary for the construction of semantic meaning. How could one connect the word “hamburger” with an actual hamburger without such a connection being presented? A combination of the robot reply and the systems reply is the most powerful answer to this problem. The robot reply includes some of the complexities associated with human intelligence. This approach would have the most potential for something like human understanding, as long as the program (as the brain of the system) could be produced to enable the machine to understand its worldly interactions. The robot reply would allow the machine to make contact with objects outside itself, and the systems reply would let it associate the symbols presented to it with the “real world” objects.


Turing Test

February 25, 2012

“Computing Machinery and Intelligence” by Alan Turing (1950), as published in Haugeland’s “Mind Design II”.


Bio Pretty Impressive and Sad…

This may be a long, boring one. If you don’t want to read the post below, at least read the biography above. He’s considered the father of modern computing, having cracked the Nazi Enigma code for the British in WWII, effectively winning the war, or at least securing the country. He was arrested in ’52 for homosexuality and took his own life two years later. Maybe I’ll try to study up on him a little later on and write something of my own about the guy.

-Alan Turing describes the Turing Test as testing for intelligence and for thought. Turing’s idea of machine intelligence is simply the perfect mimicry and imitation of humans. The machine must give the answers a judge would deem appropriate of a human and in a similar amount of time. If the machine can behave like a human it must have some intelligence.

My definition of intelligence is the propensity to learn new facts, opinions, views, etc. while thought is the function of how we use that knowledge to synthesize new ideas and give more meaning to those we already have. Thought involves the combination of ideas into new ideas and ways of understanding. So, intelligence is a necessary condition of thought. Thought is what is built from our intelligence.

If the imitation game is merely to imitate human conversation, then it is not a practical means of assessing machine intelligence under my definition of intelligence. Turing’s definition is exceedingly narrow. Under his definition it would seem to be a viable means of proving machine intelligence. If my definition were held, though the game may measure some degree of conversational intelligence, it would not be an adequate test of a machine’s true overall intelligence.

An intelligent machine might believe that were it to pass the Turing Test, people would take it apart to see how it worked, so it might intentionally fail. If a machine believed that, it would be fearing the cessation of its own being. Possessing knowledge of its own existence should be considered having a personality, and therefore the machine would be thinking. This reaction by the machine, though undoubtedly programmed, would be totally separate from all other human interaction and influence. Such thoughts, if we were able to know them, would be undeniable evidence of an intelligent machine.

Five of the ten judges in the First Turing Test thought that a version of Weizenbaum’s program was human. So naive humans might be said to be too gullible for Turing Test purposes. Suppose the government decided to make Weizenbaum’s ELIZA program vastly larger by adding more and more canned responses and developing hardware to get the machine to deliver the canned responses quickly. The resulting SUPERELIZA program, still a bag of tricks (that is, responses whose every detail was thought of by the programmers), might be thought to be intelligent even by judges who are wise to the ELIZA tricks.
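Weizenbaum’s ELIZA worked by keyword spotting and canned response templates. A minimal sketch of that “bag of tricks” might look like the following; the two rules here are my own invented examples, not entries from Weizenbaum’s actual script:

```python
import re

# A toy ELIZA-style responder: each rule pairs a regex with a canned
# template. Nothing here "understands" the input; it only pattern-matches
# and echoes fragments of the user's own words back.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # the catch-all when no trick applies

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious"))       # a keyword rule fires
print(respond("The weather is nice"))  # falls through to the default
```

SUPERELIZA would just be this with vastly more rules and faster lookup; the mechanism, and hence the objection, stays the same.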

The machine would still not be intelligent. While it could respond to any question posed to it, it would not be able to “think” as I defined it above. Processing a question and searching for an appropriate answer is a repetitive process; such a machine would have no use but answering questions. Not that that’s completely useless: Siri for the iPhone has many uses, including answering questions, giving directions, etc.

An intelligent cave-person might be very good at telling men from women in the imitation game, but nonetheless hopeless at telling people from machines because of lack of familiarity with technology. With such a judge, unintelligent machines (even iPhones) may consistently pass the Turing Test. Further, it will do no good to specify that the judge be selected randomly, for in a cave-society (where everyone is unfamiliar with technology) unintelligent machines may consistently pass, and thus will be intelligent, relative to that society, according to the Turing Test conception of intelligence. Of course, an unintelligent machine such as an iPhone will be incapable of the genuine thinking that the cave people manage easily, e.g., figuring out where to find food, understanding why the fire went out, and the like. So the machine won’t be genuinely intelligent, even by the standards of that society.

-The Turing Test seems to act as a sort of poll of the population to determine intelligence, though there should be a consensually agreed-upon scientific definition of intelligence. If intelligence depends on the society, or on the time for that matter, then we have failed at providing a true definition and test of intelligence. Something must be said about encyclopedic knowledge, however. Humans do not individually possess encyclopedic knowledge, and I’m not sure computers should be expected to possess the sort of knowledge that would be considered uniquely human. The computer should only know what is relevant at the time. If I were transported to the caveman days, I would surely be considered stupid due to my lack of practical caveman hunting and gathering knowledge, and be dead in a week’s time. Conversely, were I too cold or quick with my answers, I might run the risk of being judged a machine.

The last two objections depend on the possibility that the judge may lack the abilities necessary to discriminate intelligent machines from unintelligent ones. Is there some way of specifying the nature of the judge so as to avoid such problems?

The depth of human intelligence cannot be truly tested by the Turing Test or any other behavioral test. Conversational intelligence, the ability to communicate effectively with a human, seems to be the only skill exercised and proven by the Turing Test. If another behavioral test were proposed that limited itself to a single human characteristic, instead of the full breadth of human complexity, it would be too narrow to judge machine intelligence comparative to a human. A true behavioral test would seem to need (you may want to sit down) a replicant of some sort. Yes, like Roy or Rachael in Blade Runner. For a machine to fool humans across the board, to empathize with a human being and perform the functions of a human in an entirely convincing manner, it would have to share the experiences of a human. Just as we can’t know what it is to be a bat, a machine cannot know what it is to be a human, no matter how complex and thorough the programming. When the time comes, the judge must be the general public: if the machine can assimilate itself into the population, it could almost certainly be said to replicate a human. The other option would be the 2001: A Space Odyssey one. I will have to think about the HAL option and get back to you guys…
