Archive for the ‘Artificial Intelligence & Philosophy of Science’ Category

Searle’s Chinese Room Argument

February 25, 2012

See... philosophers smile too.

Link! http://plato.stanford.edu/entries/chinese-room/

“The heart of the argument is an imagined human simulation of a computer, similar to Turing’s Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese symbols, where a computer “follows” a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by following a program, comes to genuinely understand Chinese.” (from Stanford Encyclopedia of Philosophy)

Searle does not argue that a machine could not think. According to Searle, “Strong AI” is the idea that a machine, given the right formal program, could BE a mind that has understanding and other cognitive states, and that producing such a program would let us explain the processes of the human mind. Searle’s claim is that intentionality cannot be reproduced by any formal program, and therefore Strong AI, as defined throughout the argument, is false. Intentionality [aboutness] and understanding are linked: to understand anything, the symbols must stand for something. If a machine does not realize what a symbol means, it can understand neither the input, the action it is performing on the input, nor the output. Biological and mechanical machines can think, but only minds can understand. Though not specific, Searle does point to physical-chemical processes for an explanation of the mind. Only machines built like human brains, to function like them, can have the same understanding as the human mind.
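
To make the “syntax alone” point a little more concrete, here is a tiny sketch of my own (nothing from Searle or the Stanford entry, and the phrases are just strings I picked) of a rulebook that pairs input symbols with output symbols purely by their shape. Nothing in it knows, or needs to know, what any of the Chinese means.

    # A toy illustration of pure syntactic symbol manipulation, in the spirit of
    # the Chinese Room. The "rulebook" pairs input strings with output strings;
    # the phrases below are arbitrary examples chosen for illustration.

    RULEBOOK = {
        "你好吗?": "我很好, 谢谢。",        # the operator need not know what either string means
        "你叫什么名字?": "我没有名字。",
    }

    def chinese_room(symbols: str) -> str:
        """Follow the rulebook by shape-matching alone; no meaning is consulted."""
        return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again"

    print(chinese_room("你好吗?"))  # prints the canned reply, with zero understanding

Following rules like these perfectly is exactly what the person in the room does, and it is the whole of what the program does.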

Searle considers six replies to his argument. In this short section I am going to explain which of them I find most powerful and why. One of Searle’s claims was that no formal program could replicate the intentionality necessary for understanding. I agree that an immovable, perceptionless machine will never be able to fully understand the world: the perceptive senses are what is necessary for the construction of semantic meaning. How could one connect the word “hamburger” with an actual hamburger if that connection is never presented? A combination of the robot reply and the systems reply is the most powerful answer to this problem. The robot reply includes some of the complexities associated with human intelligence, and it would have the most potential for something like human understanding, so long as the program (as the brain of the system) could be produced to enable the machine to understand its worldly interactions. The robot approach would allow the machine to make contact with objects outside itself, and the systems approach would let it associate the symbols presented to it with the “real world” objects.

Turing Test

February 25, 2012

“Computing Machinery and Intelligence” by Alan Turing (1950), as published in Haugeland’s “Mind Design II”.

Link!

http://www.loebner.net/Prizef/TuringArticle.html

Bio Pretty Impressive and Sad…

http://www.turing.org.uk/bio/part1.html

This may be a long, boring one. If you don’t want to read the post below, at least read the biography above. He’s considered the father of modern computing, and he cracked the Nazi Enigma code for the British in WWII, effectively winning the war, or at least securing the country. He was arrested in ’52 for homosexuality and took his own life two years after. Maybe I’ll try to study up on him a little later on and write something of my own about the guy.

-Alan Turing describes the Turing Test as testing for intelligence and for thought. Turing’s idea of machine intelligence is simply the perfect mimicry and imitation of humans. The machine must give the answers a judge would deem appropriate of a human and in a similar amount of time. If the machine can behave like a human it must have some intelligence.

My definition of intelligence is the propensity to learn new facts, opinions, views, etc., while thought is how we use that knowledge to synthesize new ideas and give more meaning to those we already have. Thought involves the combination of ideas into new ideas and ways of understanding. So, intelligence is a necessary condition of thought; thought is what is built from our intelligence.

If the imitation game merely imitates human conversation, then it is not a practical means of assessing machine intelligence under my definition of intelligence. Turing’s definition is exceedingly narrow, and under his definition the game would seem to be a viable means of proving machine intelligence. If my definition were held, then though the game may measure some degree of conversational intelligence, it would not be an adequate test of a machine’s true overall intelligence.

An intelligent machine might believe that were it to pass the Turing Test, people would take it apart to see how it worked, so it might intentionally fail. If a machine were to believe that, the machine would be fearing the cessation of its own being. Possessing knowledge of its own existence should count as having a personality, and therefore the machine would be thinking. This reaction by the machine, though undoubtedly programmed, would be totally separate from all other human interaction and influence. Such thoughts, if we were able to know them, would be undeniable evidence of an intelligent machine.

Five of the ten judges in the First Turing Test thought that a version of Weizenbaum’s program was human. So naive humans might be said to be too gullible for Turing Test purposes. Suppose the government decided to make Weizenbaum’s ELIZA program vastly larger by adding more and more canned responses and developing hardware to get the machine to deliver the canned responses quickly. The resulting SUPERELIZA program, still a bag of tricks (that is, responses whose every detail was thought of by the programmers), might be thought to be intelligent even by judges who are wise to the ELIZA tricks.

The machine would still not be intelligent. While it could respond to any question posed to it, it would not be able to “think” as I defined it above. Processing a question and searching for an appropriate canned answer would make it a repetitive machine with no use beyond answering questions. Not that that’s completely useless; Siri for the iPhone has many uses, including answering questions, giving directions, etc.
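
Just to show how little machinery a “bag of tricks” needs, here is a rough sketch in the ELIZA spirit; the keyword patterns and canned replies are my own inventions, not Weizenbaum’s actual script. A keyword goes in, a canned sentence comes out, and nothing resembling thought happens anywhere.

    # A toy ELIZA-style responder: match a keyword, emit a canned reply.
    import re

    CANNED = [
        (re.compile(r"\b(mother|father|family)\b", re.I), "Tell me more about your family."),
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\byes\b", re.I), "You seem quite sure."),
    ]

    def reply(line: str) -> str:
        for pattern, template in CANNED:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # the all-purpose fallback

    print(reply("I feel nobody listens to me"))  # -> Why do you feel nobody listens to me?

Piling on thousands more patterns (the SUPERELIZA idea) changes the size of the list, not the character of the trick.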

An intelligent cave-person might be very good at telling men from women in the imitation game, but nonetheless hopeless at telling people from machines because of lack of familiarity with technology. With such a judge, unintelligent machines (even iPhones) may consistently pass the Turing Test. Further, it will do no good to specify that the judge be selected randomly, for in a cave society (where everyone is unfamiliar with technology) unintelligent machines may consistently pass, and thus will be intelligent, relative to that society, according to the Turing Test conception of intelligence. Of course, an unintelligent machine such as an iPhone will be incapable of the genuine thinking that the cave people manage easily, e.g., figuring out where to find food, understanding why the fire went out, and the like. So the machine won’t be genuinely intelligent, even by the standards of that society.

-The Turing Test seems to act as a sort of poll of the population to determine intelligence, when there should be a consensually agreed-upon scientific definition of it. If intelligence depends on the society, or the time for that matter, then we have failed at providing a true definition and test of intelligence. Something must be said about encyclopedic knowledge, however. Humans do not individually possess encyclopedic knowledge, and I’m not sure computers should be expected to possess the sort of knowledge that would be considered uniquely human. The computer should only know what is relevant at the time. If I were transported to the caveman days, I would surely be considered stupid due to my lack of practical caveman hunting and gathering knowledge, and be dead in a week’s time. Conversely, were I too cold or quick with my answers, I may run the risk of being judged a machine.

The last two objections depend on the possibility that the judge may lack the abilities necessary to discriminate intelligent machines from unintelligent ones. Is there some way of specifying the nature of the judge so as to avoid such problems?

The depth of human intelligence cannot be truly simulated by the Turing Test or any other behavioral test. Conversational intelligence, the ability to effectively communicate with a human, seems to be the only skill exercised and proven by the Turing Test. If another behavioral test were proposed that limited itself to a single human characteristic, instead of the full breadth of human complexity, it would be too narrow to judge machine intelligence compared to a human. A true test, if behavioral, would seem to need (you may want to sit down) a replicant of some sort. Yes, like Roy or Rachael in Blade Runner. For a machine to fool humans across the board, it must be able to empathize with a human being and perform the functions of a human in an entirely convincing manner; to do that, it would have to share the experiences of a human. Just as we can’t know what it is to be a bat, a machine cannot know what it is to be a human, no matter how complex and thorough the programming. When the time comes, the judge must be the general public. If the machine can assimilate itself into the population, it could almost certainly be said to replicate a human. The other option would be the 2001: A Space Odyssey one. I will have to think about the HAL option and get back to you guys…

The Big Questions! (for a philosophy nerd)

February 25, 2012

Gimme a hug! Or I WILL rip your arms off...

I thought it would be good to define a few terms for my AI class just so I knew a little better where I stand on these issues. How can I have an opinion about a machine or a mind if I can’t come close to defining what I’m thinking of?

The definitions below are my own. These are absolutely up for other interpretations. And as always I would love to hear if anyone disagrees… I don’t know what these are any more than anyone else.

What is a Machine?

A machine is an artifact that performs a task as instructed by an operator (brain, central processor, program etc.).

What is a Mind?

A mind is the combination of all mental traits of a being as they are thought and felt by that being. Intelligence, reason, memory, thoughts, emotions, some degree of self-awareness.

What is a Person?

A person is a mentally autonomous being, that is, a being that has a mind. It has self-awareness, and cares for its own well-being. It is an end in itself, according to Kant.

What is a Computer?

A computer is a machine that is able to perform complex computations via a program installed into and run by its hardware. It has memory storage, an active processor, and a program or instructions on what function to perform and under what circumstances.

What is a Robot?

A robot is a computer given some physical manifestation so that it may interact with the objects and area around it.

What is Intelligence?

Intelligence is a quality that determines one’s ability to learn. Factors include cognition, memory, and understanding (comprehension).

What is Thought?

Thought is the synthesis of comprehended ideas into new ideas and understandings not contained in the original cognitive perception. It builds on the factors of intelligence: cognition, memory, and understanding.

What is Emotion?

Emotions are what persons use, as a result of self-awareness, to judge the world around and within them. They play a role in forming our opinions and beliefs about everything that is and everything that could be.

What is a Right? (Who has them? Why?)

Rights are based on ethical concerns regarding how we agree to treat each other. All persons should be extended at least those rights that prevent unnecessary suffering and pain. As persons, we value ourselves as ends in ourselves and in turn should value others as ends. Any being that values itself and its interests must in some small way think of itself as an end.

The Human Condition

February 1, 2012

Each is trying not to give himself or herself away,
each is preserving fundamental loneliness, each
remains intact and therefore unfructified. In such
experiences there is no fundamental value.
-Bertrand Russell

Self-conscious, much?

January 30, 2012

Behaviorism isn’t all useless. I’m guessing I can think of a couple things that may be going through this gal’s head…

Behaviorism

January 30, 2012

Gilbert Ryle published “The Concept of Mind” in 1949, accusing philosophers of mind of accepting the Cartesian Myth. Descartes’s myth was this: the mind is an inner sanctum that can only be known through introspection and shows up outwardly only in mental talk. “I’m scared.” “I love my cat.” The myth challenged philosophers to come up with some account that could explain the inner sanctum’s relation to expression in the public world of people, objects and actions. Ryle’s example of the problem is like that of a visitor being shown around a university campus. Here is the library, here are the lecture halls and the dormitories, and so on. When the tour concludes the man says, “Yes, but where is the university?” The man, seeing all these pieces, fails to notice that the university is nothing extra. The buildings are the university.

For Ryle, the mind was simply its behavioral manifestations. Due to the organization of our mind, we have intrinsic behavioral dispositions that express themselves as these behavioral manifestations.

The mental talk mentioned earlier is merely a way to pick out behavioral dispositions. It picks out what so-and-so is likely to do in some particular circumstance, without appealing to any special state within an inner sanctum. Saying that salt is soluble in water is not to say that there is a spirit of solubility in salt; it just says that when salt is added to water, it dissolves. Our mind, according to behaviorists, is nothing but behavioral dispositions, be they simple or complex.
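
If it helps, here is a toy sketch of that move as I understand it; the circumstances and behaviors are made up for illustration. A mental term is treated as nothing more than a table from circumstances to characteristic behavior, and no inner state is consulted anywhere.

    # "Dispositions" modeled as plain circumstance -> behavior tables.
    SOLUBLE = {"placed in water": "dissolves"}      # the solubility analogy
    FEARFUL = {                                     # "fear" as a bundle of behavioral dispositions
        "sees a spider": "backs away",
        "hears a loud noise": "flinches",
    }

    def manifest(disposition: dict, circumstance: str) -> str:
        """Return the behavior the disposition predicts in this circumstance."""
        return disposition.get(circumstance, "no characteristic behavior")

    print(manifest(FEARFUL, "sees a spider"))  # -> backs away

On this picture, the table is all there is to the mental term; the behaviorist owes nothing more.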

The Problems of Behaviorism:
1.) The analysis can be seen as infinite or circular: infinite if we have to specify every possible action of an individual, circular if, when predicting actions, we make irreducible reference to some other mental states.
2.) It tries to do away with mental states altogether. Don’t we have some inner states? Inner feelings, pain, images, etc.?
3.) Behaviorism may be explanatorily shallow. Though it says that salt dissolves in water, can we not still ask how and why salt dissolves in water, and search for the nature of solubility? It also seems to commit a “method actor’s fallacy”: attributing certain mental states to anyone who displays the usual signs of the relevant feeling, while denying them to anyone who is able to suppress the usual signs. Just because I don’t appear to be in pain doesn’t mean I’m not…

Artificial Intelligence and the Philosophy of Science

January 24, 2012

The Terms… (Part I)

Here are some terms that are going to be important for exploring Artificial Intelligence. These are quick intros to several of the most prevalent theories of mind. We will later look at some of the fundamental issues, problems and opportunities they create for the subject of artificial intelligence.

Dualism
In Dualism, mind and body are separate and distinct substances. In the universe there is material stuff: stars, planets, trees, leaves, water, carbon and everything else we normally talk about without complication, and there is the other stuff, the stuff of our inner mind. When we examine our beliefs, thoughts, feelings, etc., we don’t find much by way of the physical. None of these things are colored, heavy, or any of the other attributes we typically associate with material objects. From this it seems perfectly natural to assume that there is a difference between our brain and our mind. If this is the case, what is the relationship between the physical body and the nonphysical mind? How do they interact with the other material objects in the world, and for that matter, how do they interact with each other?

There are three forms of dualism: (Don’t let the fancy names scare you, the explanations will do that!)
1.) Parallelism
2.) Epiphenomenalism
3.) Interactionism

1.) Parallelism claims that the mind and body are clearly distinct and causally isolated (neither one has any effect on the other). The problem with this view is the appearance of causal linkage. If the mental can only affect the mental and the physical can only affect the physical, how does getting smacked on the head, a la Eddie with a chainsaw, cause immediate, rather than delayed, disorientation? How are two distinct and causally unassociated substances able to synchronize so well? The parallelist says that the physical and nonphysical have been wound up like watches and set in motion at the same time, so that at three o’clock both “watches” read the same time. A knock on the head doesn’t cause the confusion; it just relates to it perfectly in time. So, the real question is: who wound the clocks and set them into synchronized motion to run in perfect harmony? If God wound the clocks, why did he decide on such a complicated and clumsy method of keeping us in check with ourselves?

2.) Possibly even weirder is the idea of, say it with me, epiphenomenalism. This idea tries to solve the problem of the physical/mental separation. We all know that drinking too much can cause a shift in Mikey’s behavior and attitude (no explanations please). Epiphenomenalists put forth the idea that the physical, e.g. alcohol, can, indeed, affect the nonphysical (mental). The weird part is that the mental has no control over the physical. I think this one is a bit crazy. My wanting a beer is usually what encourages me to head to the liquor store. But these guys think that while feelings, beliefs and the like are caused by the physical, they cannot cause the body to act. I was going to try to give an example, but really… what the hell?

3.) Finally! Number 3… This is the least crazy view of dualism and probably the most difficult to explain and argue against: Interactionism. Interactionism takes care of the mind/body causation problem. I would venture to say that many people in the world are Interactionist Dualists, depending on their definition of “Soul” and how it contributes to our daily decision making. Proponents of Interactionism claim that the mind and body are distinct but causally integrated. If that is true, how do the two substances (physical and nonphysical) relate and cause action in the other? This type of Dualism is the view held by the great philosophical granddad René Descartes. For anyone asking who this dude is: remember Cartesian coordinates from grade school (plotting points on a grid and drawing a line) and “I think therefore I am”? That’s this guy. He was an intellectual badass in his day. So, if Descartes believed it, why shouldn’t we (other than it being an appeal-to-authority fallacy)? Why was it given up?

There are two reasons:
The first reason is that the dependence of the mental on the physical is clear. As in the epiphenomenalism example, alcohol not only affects my ability to walk and speak without a slur (physical), it affects my mood and behavior (nonphysical). People sustaining injuries to various parts of the brain are affected in ways that are predictable to modern science. A prefrontal cortex injury can turn the nicest, most polite man in the world toward inappropriate behavior (see the link at the bottom of the page for a fun case study). So, a brain injury can actually change your overall personality, WHO you are… While a magnet attracting iron filings is an example that indicates the power of nonphysical forces on the physical, effects on the brain indicate a clear and strong case for the physical’s effect on the nonphysical. Though these extreme examples aren’t exactly against what Descartes meant when he said the two have causal interaction, they aren’t exactly for his theory either (he believed mind and body interacted through the pineal gland, a small gland in the brain, not through the brain as a whole). So, there must be some brain/mind correlation, which I think is obvious to us moderns. That leaves two options: the mind is either somewhere in the body (brain or not) deciding what the body should do, or the brain and the mind are one and the same. This last option is what is called Materialism, and in the end it wins out for simplicity. It causes the least amount of recurring problems to continually solve.

The second reason against Cartesian Dualism is that its positive arguments are unconvincing. These include “how could” arguments and introspection or “just feel” arguments. Many would argue against materialist theories by asking how a merely physical object could perform some action. Descartes himself thought that reason and calculation are things only a soul could do. The problem is that, this many years later, we have calculators that can do mathematics faster than any of the rest of us ever could, and computers seem to be well on their way to human reasoning. As such, people have already switched to saying things like, “well, a computer could never feel happiness, sadness, etc.,” but that isn’t even inconceivable (thanks Vizzini!). The introspection arguments say that we can look inside ourselves and see what a feeling is like. They may say something along the lines of “I can just feel that this isn’t a brain state.” This argument suffers from a general weakness: if I feel queasy, it may not strike me that the nausea is a mild case of salmonella, but it still may be.

Whew!
I’m glad we got through that! Kinda ridiculous, no? But you can definitely see why it stuck around so long. The problems of dualism, though serious, are very subtle and hard to pin down. Anyway, I think that should be enough for a day. Next up will be Behaviorism, the first theory to seriously challenge dualism. I hope everyone is as excited as I am…

Those were by no means the only reasons dualism failed, so if you have any arguments for dualism or against anything I said… Let ‘em loose!

Link to Phineas Gage case.
Had what was essentially a crowbar shot through his skull and survived. Read on to see the gory details!
http://neurophilosophy.wordpress.com/2006/12/04/the-incredible-case-of-phineas-gage/
