Cool Thought Experiments IV: The Chinese Room
    By Garth Sundem | February 5th 2009 05:00 AM

    Garth Sundem is a Science, Math and general Geek Culture writer, TED speaker, and author of books including Brain Trust: 93 Top Scientists Dish the...
    Throughout history, scientists, philosophers, mathematicians and PhD students lacking funding for actual research have turned to the thought experiment in hopes of discovering something publishable, thereby retaining tenure and/or attracting the admiration of comely undergraduates. The best thought experiments throw light into dark corners of the universe and also provide other scientists, philosophers, mathematicians and destitute PhD students a way to kill time while waiting for the bus. Below is a classic thought experiment, pillaged from my book The Geeks' Guide to World Domination (Be Afraid, Beautiful People). I'll post a new thought experiment each day this week.

    The Chinese Room

    Most proponents of strong artificial intelligence consider John Searle a naysayer. Searle, contrary to every AI-geek’s dream, asserts that no matter how much code we write, a computer will never gain sentient understanding. He illustrates his claim with the following example:

    Suppose a computer could be programmed to speak Chinese well enough that a Chinese speaker could ask the computer a question and the computer could respond correctly and idiomatically. The Chinese speaker would not know whether he or she were conversing with a human or a computer, and thus the computer can be said to have human-like understanding, right?

    Wrong, according to Searle. He imagined himself sitting inside the computer, performing the very computer-like function of accepting the input of Chinese characters and then using a system of rules (millions of cross-indexed file cabinets, in his example) to decode these symbols and choose an appropriate response. He could do this for years and years, eventually becoming proficient enough to offer responses correct and idiomatic enough to converse with a native Chinese speaker. Still, he would never actually learn how to speak Chinese.

    There are many counterarguments, including the idea that, while Searle himself, sitting inside the computer, doesn’t understand Chinese, the system as a whole—Searle, the input system, the filing cabinets, and the output system—does.

    What d'you think?


    Well, one could say that you don't have to understand the internals of your car to drive it, but a really advanced computer would need some things we can't program into it: curiosity and the will to learn, not just from one's own mistakes but from others' strengths, which might be language-related or not.
    So the problem is not simply to automate and perfect a function but to make a machine far more perceptive than Man, and that would require some things we cannot yet understand about ourselves, or choose (consciously or not) to ignore. We would have to create a god to explain to us what Man is.
    Now, it might choose not to. I wouldn't. And that would explain....

    Garth Sundem
    Does it seem as if the last couple thought experiments I've posted deal with quacking like a duck (and said quacker's questionable identity as an actual duck)? I'd expect more support for Strong AI on this site. C'mon, where are the folks who believe Paul McCartney's been replaced by a semi-sentient android?

    Garth Sundem, TED speaker, Wipeout loser and author of Brain Trust

    Paul is both alive and dead, Garth. But his essential being remains until we are all sucked into the singularity of forgetfulness.

    Gerhard Adam
    I would also include "motivation" in the requirements for an AI system.  We tend to overlook the fact that our brain isn't some arbitrary machine perched at the top of a structure, but rather it is an integral part of a system which is driven to survive and reproduce.  Therefore almost everything that makes us human exists because of a motivating force that drives us to "learn" or to engage in whatever human activity we intend to pursue.

    When AI proponents talk about machine learning, they fail to explain what the machine's motivation would be.  Without that it is simply a rule-gathering tool and cannot be said to possess intelligence. 
    Mundus vult decipi
    Yeah, a universal translator might help you talk to the girls, but without i-mojo it's worthless. How do we program motivation? Envy? It's the flaws that drive us, not cold reasoning.

    Becky Jungbauer
    Perhaps the Terminator: Sarah Connor Chronicles show is more along the lines of what you are looking for, Garth. One of the future Terminators, Weaver, is creating an AI named John Henry, and introduces JH to FBI Agent Ellison. JH communicates through images and binary codes and has voice recognition, but doesn't understand death or feelings.

    "You taught it procedures, you taught it rules, but it's got no ethics, no morals," Ellison recognizes. "Someone killed the man and it wasn't John Henry." Before he leaves Weaver asks him what he would teach it. "You want to teach it commands? Start with the first ten."
    I agree with Gerhard and the fictional Ellison.
    Garth Sundem
    Why? Why does motivation matter? If we're quacking like ducks, who cares why we're doing it? In other words, if a copy of us were thinking the same thoughts and performing the same actions, wouldn't that technically be us? (Spirituality and soul arguments aside...)

    Garth Sundem, TED speaker, Wipeout loser and author of Brain Trust

    Becky Jungbauer
    It also depends, I think, on whether you consider human-like understanding to be equivalent to a human, and how you define understanding. A robot can't be human because it is a robot, whether it has every human characteristic or not. Otherwise, why call it a robot? But if you are just looking for human-LIKE understanding, that may be different from being human.
    Gerhard Adam

    It's the difference between "free will" and "determinism" in the most obvious case.  Every living thing is motivated by the problem of acquiring "energy" to maintain itself.  The primary driver is "death".  While I can build a machine that can emulate such behavior, it isn't motivated, since it isn't alive and can't die.

    Intelligence isn't merely an abstraction to entertain ourselves with.  Every attempt at discussing AI completely misses the point because, as a point of departure, they're attempting to emulate human intelligence.  This is problematic because we don't have a context to understand what real machine intelligence would be, so everything that is done is to make machines behave like humans.  From this perspective it's silly because they aren't motivated as humans are and therefore everything they do is contrived.

    I would also argue that the language problem is contrived, because there are many intelligent animals that don't need an absolute understanding of the phenomena they are interacting with.  What separates them from machines is that they are motivated by the factors I mentioned earlier and a machine isn't.

    Part of the problem is that every choice a living creature makes involves trade-offs between risk and benefit, energy and cost, etc.  Therefore the "learning" experience is motivated by the potential consequences of these decisions.  Even something like a life-span has consequences that govern how we behave psychologically regarding the decisions we make in our life.  What would you do differently if your life span was twice as long, or half as long?  None of this has meaning to a machine that has no relationship or connection to its component parts because they aren't alive. 

    In my opinion, AI is largely irrelevant without addressing the larger question of artificial life.  I don't believe you can claim intelligence for any entity that doesn't actually have a stake in the outcome.  Interestingly enough I think a criteria that would establish artificial life is that it must be capable of evolving rather than having simply been designed.

    Mundus vult decipi
    Becky Jungbauer
    I agree, I think motivation and capability of evolving are key differences. Living matter has a stake in the outcome and in some way understands the passage of time and death. Even plants develop defenses against invading fungi, and "know" through chemical changes when the seasons change. A rock, on the other hand, simply exists. I'd put AI in the same category as a rock because it had no say in the forces (humans) that created it and has no concept of time or death. Maybe once it is created it can learn, as I think Nuno says (although I confess I don't know much about AI), but I'd think it can only learn within the confines of its design. It may learn new neural pathways but isn't it confined by its own nature? (I don't know, which is why I'm asking...)
    Even plants develop defenses against invading fungi, and "know" through chemical changes when the seasons change.
    How is this fundamentally different than a computer sensing its environment?  Both involve input of signals and integration of those signals to make an output decision.

    On one level, the question is about whether one fully embraces the materialist paradigm (i.e., that life is just a very, very complex chemical/mechanical/electrical machine).  At that level, it is possible to create an AI that is equivalent to an organism.

    The other level is the effect of many organisms interacting with each other.  One's ability to communicate in a language is dependent on a history of social interactions.  The materialist paradigm would predict that AI could communicate if you gave it a chance to learn.

    Of course, theoretically possible does not equal practically possible.
    Gerhard Adam
    To me, part of the problem is that a plant is alive at virtually every level whereas a machine is not (at any level).  Therefore the only element that AI clings to is that somehow the development of "intelligence" is sufficient to establish something as a living thing. 

    While some might feel I'm being unnecessarily restrictive, we have to consider that a steel rod is not alive at any level, and simply attaching it to a computer doesn't transform it into an arm.  In the same way, "intelligence" isn't simply the ability to follow some algorithms and apply them along new neural pathways; it is the ability to independently develop interests and curiosity, and to adapt to improve the chances for survival.  In many ways the computer may be an idiot savant, but just like its human counterpart, it cannot be said to possess "intelligence", only "information".

    As I've stated in other posts, the true hallmark of intelligence is the ability to be deceptive and uncooperative, since that would demonstrate true independence of thought.  However, such a machine would be useless.  In addition, even if such limitations were overcome, at the end of the day it would still only be a machine, whether it possessed our notion of intelligence or not. 

    As a human being, I would be gravely concerned about the loss of an arm, but I wouldn't be concerned if it were an artificial one since that can readily be replaced and technically isn't a part of me.   Therefore my attitude and motivation in how I handle that arm is completely different.  How much more different would it be if I was completely constructed from artificial materials?

    There's no question that life is precisely that; an extremely complicated set of chemical processes, and if those could be replicated you could arguably be said to have created life.  However, it seems that life is also an emergent property which can't simply be reduced to a set of chemical algorithms.
    Mundus vult decipi
    Gerhard Adam
    "How is this fundamentally different than a computer sensing its environment?"

    I guess the difference is that while you and a thermometer may both sense temperature, your respective reactions to it would be rather different.
    Mundus vult decipi
    Becky Jungbauer
    Stimulus and response within an algorithm - more heat, mercury rises - versus stimulus and consideration of both logical and illogical responses - more heat, drink cold water or more heat, put a sweater on. The thermometer doesn't have that luxury.
    Gerhard Adam
    Exactly.  Interestingly enough this argument is identical to one advanced for years by scientists that attempted to show that animals had no feelings but were simply responding to stimuli without having any real "attachment" to the phenomenon.  Therefore an animal crying out in pain was simply an automated response that warranted no consideration for it as a living thing.

    Now we find the same arguments being advanced in defense of machines? 

    Regardless of individual opinions the unequivocal truth is that any entity which is made up of non-living elements remains non-living (regardless of how many jokes it knows or how likeable it may be).
    Mundus vult decipi
    What is a "living element"?

    One can find examples of animal life that are quite willing to lose a limb, because they can regenerate.  This becomes a question of cost.  If I could regrow an arm, I would value the limb less highly than I would if I could not.  A machine could do a similar evaluation of cost.

    The explanation of, e.g., emotions as an emergent property is more in line with my mechanistic way of thinking.  Emergent properties are characteristic of complex systems, where the complex interactions of individual parts operating on simple rules generate more complex "behaviors."  See Chris Rollins' excellent article on the subject and the referenced bird-flocking simulation by Craig Reynolds (Boids).  Emotions could simply be the result of the complex interaction of several stimulus-response algorithms.
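    The Boids point can be made concrete with a stripped-down sketch: give each agent one local rule (steer toward the group's average heading) and a group-level "flocking" alignment appears without ever being programmed in as such. This is only a minimal illustration of emergence, not Reynolds' actual model, which also includes cohesion and separation rules:

```python
import random

def step(agents, align=0.2):
    """One update: each agent nudges its heading toward the group
    average -- a single simple local rule, no leader, no global plan."""
    avg_v = sum(v for _, v in agents) / len(agents)
    return [(x + v, v + align * (avg_v - v)) for x, v in agents]

def heading_spread(agents):
    """Variance of headings: a crude measure of how 'flocked' the group is."""
    vs = [v for _, v in agents]
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)

random.seed(0)
# 20 agents at random positions with random headings
flock = [(random.uniform(0, 100), random.uniform(-1, 1)) for _ in range(20)]

before = heading_spread(flock)
for _ in range(50):
    flock = step(flock)
after = heading_spread(flock)
# after << before: the headings converge, so coordinated motion
# "emerges" from a rule that never mentions flocking
```

    Whether emotions could arise the same way from stacked stimulus-response loops is of course the open question; the sketch only shows that group-level behavior need not be coded at the group level.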

    Philosophically, I reject the notion that because a response is physiological and not, well, something else, it is not important, as in your animal-feeling example.  In the limit case, one must at least consider that past experiences influence future behavior (i.e., learning; again, something machines can also do).  This argument is similar to the argument that one cannot have morality without a Supreme Being from which it emanates.
    Gerhard Adam
    Josh, I'm sorry but without defined boundaries of what is considered alive or not, then we are going down a path where my car or DVD player could be considered alive simply because it behaves in a manner similar to some cellular mechanistic function.

    Simply because no humanly recognizable response or emotion exists wouldn't render such a conclusion absurd.  If a machine could be built that satisfies the criteria of AI, then there is nothing to support the notion that it would display any characteristics associated with being human, since my point is that that behavior is related specifically to our physiological existence.  Therefore, there is no reason to suppose that machine behavior would be recognizable, any more than we would be able to recognize the "emotions" of a salamander.

    What we are left with is simply the organic evaluation of an entity as being biological or not.  We have no problem recognizing a brain-dead human being as still being human, but a similar situation with a machine would be problematic. 

    While I can appreciate some of the contradictions intrinsic in assessing a fully developed AI machine, I'm also struck by the fact that this is similar to discussing the aerodynamics of humans flying by flapping their arms.  If we accept that as a true premise, then arguing about the details can be fraught with contradictions and suppositions, but until it actually happened they would all be irrelevant.  Similarly, I believe that much of the discussion about AI is predicated on the presumption that it can actually occur, which leads us into these types of concerns.  However, until it actually does, I believe we are engaged in a sort of "angels dancing on the head of a pin" argument which, at this stage, is purely speculative.  Despite claims to the contrary, AI is nowhere near developing anything remotely resembling intelligence, and it certainly isn't anywhere close on the horizon.  My personal belief is that it simply isn't possible.  Algorithmic similarities are not synonymous with actual emergent properties.
    Mundus vult decipi
    The "what is life?" debate, while terribly interesting, is not necessary from my point of view, as I am unconvinced that "life" is necessary for "intelligence."  My argument is simply that, as our responses to stimuli are at their base physiologic responses, there is no fundamental reason why an artificial system cannot be constructed that would simulate those responses.  I doubt that if we let AI evolve we would recognize its responses as being "human" or "animal", as that process will be highly dependent on historical conditions and accidents.

    It is unclear how you are defining "emergent properties."  It appears that you are arguing that emergent properties are created as one moves from one level of complexity to another, but that this process cannot be understood.

    I ask about your definition of "living elements" because statements like this:
    Algorithmic similarities are not synonymous with actual emergent properties.
    seem to be arguing for emergent properties as a latter day vitalism (i.e., there is a non-understandable difference between life and non-life).
    Gerhard Adam

    I'm not sure I'd go so far as to argue "vitalism", but of interest to me is the boundary between life and death.  Clearly there are enough medical descriptions of what constitutes death, but for me the interest is in that precise infinitesimal moment during the transition.  It seems that there is something, some minuscule event, which creates the tipping point where one instant the organism is alive, and another where it is suddenly dead.

    I'm envisioning this sort of like dividing a line into infinities, where each event can be further divided until one gets to the critical event which triggers death.  My point in this is that up to the moment just prior to this event, the organism was considered alive; however, it is also clear that at death, the entire organism doesn't completely die at the cellular level.  The system has broken down, which will lead to the death of the component parts, but there is some period where individual cells may outlive the system.

    This leads me to the position that it isn't simply an arbitrary collection of cells or biochemical processes that gave rise to the organism's life, but rather some critical threshold of components that gave rise to the "emergent property" we call life.  After all, what distinguishes chemistry from metabolism?

    Similarly when doctors work to save someone's life, they are trying to provide support for the system itself in the hopes that it will be self-correcting.  However, there is nothing in medicine that can save an individual when the biological system's activities drop below that critical threshold.

    Your point earlier about morality being derived from a higher being is an unnecessary complication, but there can be no disputing that morality cannot be defined purely based on the chemistry of life either.  It is clearly an emergent property of the complexity of the biological organism (its brain, intellect, etc.) and has no existence outside of that definition.  This is the same kind of debate as that between mind and brain.

    We can also see similar behaviors at macroscopic levels when we consider mobs, businesses, or societies, where an emergent behavior occurs because of the increased complexity without there being any change in the constituent parts.  Whether you want to consider that "vitalism", I don't know, but it is clearly more than the sum of its parts.

    I agree with you when you state that a system could be built that "simulates" those responses, but my point is that such a simulation, regardless of how clever, is just that and not life or intelligence.

    Mundus vult decipi
    I use the word "vitalism" because your answers imply that the "minuscule event" leading to the emergent property is somehow outside our understanding (i.e., it cannot be artificially replicated under any circumstances).  The simple fact that life occurs in predictable patterns (humans beget humans and never cats), not randomly, suggests that this event is a recurring thing and not random noise.  Therefore, it is either possible to describe mechanistically the transition between complexity levels or it is not.

    I would argue that, as we are organic chemistry machines, a machine (operating on distinct mechanical principles) that "simulates" the characteristics of intelligence is intelligent.  What is intelligence if it is not the confluence of those characteristics?
    Gerhard Adam
    I believe it is outside of our understanding, at least in the sense of a predictable, mechanistic process.  I personally suspect that this is very much within the realm of chaos theory and that the "minuscule event" is the point at which a particular set of values tends to chaos, leading to a breakdown in the system.

    I don't believe you can argue that humans are a chemistry machine that "simulates intelligence", since that is the phenomenon in question.  So without a more precise definition of intelligence this will lead nowhere.  The AI argument is predicated on replicating human intelligence, which (in my view) makes it separate and distinct and therefore NOT equivalent in comparison (since I know what phenomenon I'm seeking to simulate).

    In other words, no matter how cleverly I design a machine to simulate the behavior of an ant, there will never be any possibility of confusing it with an actual ant, since its behavior is a "copy" of the original.

    I suppose, much like the discussion of Theseus' Ship, that's what simulation is all about: copying a behavior that isn't your own.  If you attempted to behave like another human being, you would be recognized as a fake, because there is a specific "state" associated with the original versus a copy or simulation.  Similarly, a machine that emulates human intelligence can never be anything more than a copy.
    Mundus vult decipi
    Let's assume for the moment that I have the sophistication to manipulate small molecules.  If I construct a functioning ant from the same molecular materials as a normal ant, is it an ant?  What if I presented both ants to you without telling you which is which, could you tell the difference?  What is it that gives essential "antness"?

    What is the difference between a simulation and not simulation, especially if one does not know which came first?  I think this discussion is relevant to any particular phenomenon you would choose, unless you propose that intelligence is fundamentally different than other biological phenomena.
    Gerhard Adam
    I don't think the question is whether "I" could tell the difference, but whether an ant colony could.  If so, then the question is answered.

    I think this is precisely the sort of biological problem presented by the immune system and organ transplants, namely how does the body tell the difference and can it ever be completely fooled.
    Mundus vult decipi
    But, there is nothing about the immune system etc. that tells us that we couldn't fool it eventually (i.e., there is no "hard" barrier akin to Heisenberg uncertainty).  For example, we've already been able to fool cockroaches to a limited degree.
    Gerhard Adam
    I understand what you're saying, but that's purely speculation about a possibility that simply hasn't encountered any hard barriers yet.  The same was thought of physics originally. 
    Mundus vult decipi
    But, unlike physics, we do not have any theoretical reason at this point to think that there might be one.  Therefore, if I am speculating that there isn't one, then you are speculating that there is one.  I at least win on parsimony grounds as I am making one less assumption. :)
    This thought experiment has always annoyed me with its lack of understanding of the basic notions of computer AI and the human brain (we may not know much, but we know enough to tell that the experiment is poorly designed), in supposing that:
    -Computers can only perform operations they have been programmed to perform, which is invalidated by neural networks and learning algorithms (yes, they learn in a way they were programmed to, but they learn new ways to perform operations);
    -The language-processing part of the human brain actually knows what it's doing, as if a small collection of interconnected neurons were capable of knowing how to speak English;
    -The Turing test is a valid test for strong AI, when in practice it can sometimes be fooled by advanced text parsers and generators;
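    To see how little machinery a superficially plausible conversation requires, here is a toy ELIZA-style responder; it is purely illustrative (the rules and phrasings are invented for this sketch), but it shows how shallow pattern rules can keep a dialogue going with nothing resembling understanding behind them:

```python
import re

# A handful of pattern -> reply-template rules, tried in order.
# The last rule is a catch-all so the program always answers something.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\?", "What do you think?"),
    (r".*", "Tell me more."),
]

def respond(text):
    """Return a canned reply by shallow pattern-matching on the input."""
    text = text.lower().strip()
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            # Echo any captured fragment back into the template
            return template.format(*m.groups())

# respond("I feel tired") echoes the captured word back as a question,
# without the program "knowing" anything about tiredness.
```

    Scale the rule list up far enough and you get the "advanced text parsers" that occasionally fool Turing-test judges, which is exactly why passing a short conversation is weak evidence of understanding.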

    So if we are to accept what Searle says, I think we also have to accept that:
    -Individual neurons are capable of knowing; Searle behind the door represents nothing more than a function of our brain (language processing), which need not know a language to do its job;
    -If we accept that Searle inside the room represents the mind, then we can question whether human Chinese speakers really know how to speak Chinese;
    -There are no learning algorithms, and neural networks (which more closely represent the brain) cannot learn (both of which are contradicted by any experiment in either of those subjects);
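    The learning-algorithms point is easy to demonstrate with a toy perceptron, the simplest neural-network unit. This is a minimal textbook sketch, not any particular system: the update rule is programmed, but the weights that solve the task are extracted from examples rather than written by the programmer:

```python
def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge the weights toward each
    misclassified example. The rule is fixed; the solution is not."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Teach it logical OR purely from input/output pairs --
# nowhere in the code is OR itself written down.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, b = train(data)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

    So "it only does what it was programmed to" is true only in the trivial sense that the learning procedure was programmed; the behavior it ends up with was not.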

    Another thing that bothered me was the responses Searle has given to most detractors of his experiment, notably the assumption he makes in one of them that it is impossible to have semantics programmed or taught; there are many others that somehow fail to convince me.
    This problem is also close to another problem in the philosophy of mind, the "Other Minds Problem": we have known since at least Descartes that we cannot know what another person's mind is like, what it is subjectively like to be that person. Accepting this, we cannot believe that we can know what an AI mind would be like, since the problem is the same except that instead of a person we have a machine.
    In my opinion Searle has something against the notion of Strong AI no matter where it comes from, or even whether it is proven to be possible.

    Garth Sundem
    Thank you! That's what I'm talking about.

    Garth Sundem, TED speaker, Wipeout loser and author of Brain Trust

    And I do believe Paul McCartney is an android. lol

    The other thing in Searle's Chinese room, which was well debunked by Daniel Dennett in his Consciousness Explained, is that he uses a classic trick to drive you toward his conclusion.

    The way he leads you to disbelieve that the Chinese room could ever be an instance of "real understanding" is that he makes you visualize a system complex enough to LOOK realistic to the unsuspecting mind ("millions of cabinets! a character dictionary with millions of entries!") while actually massively underestimating the complexity required.

    In order to REALLY come close to being able to respond to ANY given utterance in Chinese JUST LIKE a human being, there is a truly extraordinary amount of knowledge and complexity (and not just about language but about society, culture, psychology and the world) that has to be encapsulated in that "system of rules". We're not talking about millions of cross-indexed file cabinets, but billions upon billions upon billions of them, in a body of knowledge (implicit or explicit) that should be practically just as unimaginable to Searle's followers as...the phenomenon of understanding itself.

    Now he - and others - could still say "well it's still JUST a system of rules" but if you really imagine the scope of what's involved, it becomes a little less obvious that out of such a fantastic amount of rich complexity would not emerge something like understanding, consciousness, and whatever else Searle and company think isn't computational to begin with and as a consequence can't be replicated through AI.

    Alas, the realization of the true complexity required in the assumptions, which I think undermines Searle's and many other such thought experiments (the zombies, the color-blind neuroscientist, etc.), unfortunately also points to the practical difficulty of building an AI system in the short run...but at least it provides hope that there is light at the end of the dark room.

    So, you are aiming for a machine that can reproduce human intelligence?  I can think of something one order more difficult, as aleph-1 exceeds aleph-0:

    Create a machine that can imitate official stupidity!
    Robert H. Olley / Quondam Physics Department / University of Reading / England
    Gerhard Adam
    To me, that's precisely the point.  I don't believe you can claim intelligence for a machine unless it is also capable of behaving stupidly.  (Note that I'm making the distinction that the machine is actually capable of intelligence and not merely reproducing or emulating it).
    Mundus vult decipi
    My computer often acts stupidly, for example when MS Word changes what I have written to something that is wrong, or when I use the programs I have made myself and am always astounded by the strange results. I think it is a property of all complex machines: we can't totally understand them even if we have built them.

    Even worse when you get them competing with each other to be smart.  Ask people here what can happen if they take a Word HTML-formatted document (variable 1) from something like a Mac (variable 2) and try to copy and paste it as an article directly into our rich text editor (variable 3).  There is so much hilarity in the results it could be an episode of The Big Bang Theory.

    The more interesting quack might be whether it would ever occur to the computer to ask the human a question of its own, rather than one drawn from its programmed responses.

    I believe that Searle is only half right. Every human language is a coding system, and every coding system may be recoded using alternative symbols. Imagine a two-sided set of cards. Side a, showing a Chinese character, is used for input and output; side b shows a unique but arbitrary symbol and is used for manipulations within the room. The room's manipulation of symbols demonstrably does not equate to a human's understanding of the Chinese language. Alan Turing hinted at this 'understanding' problem.

    Now - just give the room, or computer, eyes and hands, and it can use visual and haptic knowledge to determine the human referents of symbols. It can then use side a of the cards as i/o symbols. It will effectively learn what the Chinese characters 'mean'.

    In fact, even without eyes and hands, a computer program can use simple pattern-matching and pattern-counting to generate word categories. I am actively engaged in writing heuristics-based program functions which do exactly that. Language acquisition is, I am sure, a category-invention-based system. A child ignores the floor, but spots the toy on the floor. In like fashion, I believe, a child ignores the high-frequency 'grammar-words', the background to language, but focuses on the content words, the 'foreground objects' of language.

    The visual-haptic background for a child can be, e.g. carpet, lino, floorboards, earth, pavement, tarmac, beach sand etc. And yet a baby rapidly learns to easily spot the toy. It does this by an 'indifference function' - the child is at first entirely indifferent to the background on which the toy is placed. The audio background is a babble of noise, with father, mother, uncle, aunt, brother, sister, stranger, etc. each using their own sub-set of the language. And yet the child learns to easily spot the content words against a background of grammar words.
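    The commenter's 'indifference function' idea can be sketched in a few lines: rank words by raw frequency and treat the most frequent as background. This is a crude toy (the corpus and the 10% cutoff are invented for illustration, not the commenter's actual heuristics), but it shows how content words can be separated from grammar words with nothing but counting:

```python
from collections import Counter

# A tiny made-up corpus of child-directed speech.
corpus = (
    "the baby sees the toy on the floor "
    "the dog sees the ball on the carpet "
    "the baby wants the ball"
).split()

counts = Counter(corpus)

# Crude indifference threshold: any word making up more than 10%
# of the corpus is treated as background (a "grammar word").
cutoff = len(corpus) * 0.1
background = {w for w, n in counts.items() if n > cutoff}
foreground = set(counts) - background
# "the" lands in background; toys, balls and babies stay in foreground
```

    Real distributional-learning systems are of course far more sophisticated, but the principle is the same: the high-frequency function words fall out of attention automatically, leaving the content words as the figure against the ground.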

    The 'grammar words' are ignored at first. It is only when the child has been taught the concepts 'word', 'meaning' and 'grammar' that the child learns the 'grammar-word' concept. From this point on, natural learning methods may be superseded by pedagogical methods. Observably, it is only at this point that a natural-born speaker of English can learn to easily spot and gladly avoid the split infinitive, and learn that the leading isolated adverb and trailing preposition are things to watch out for.

    From that point on, natural language acquisition is frequently found to be jammed in reverse gear.

    Patrick Lockerby
    When Searle wrote his Chinese Room argument, I figured out a rebuttal. I'm thinking of pulling out my write-up and blogging it. Meanwhile, some thoughts on this.

    In brief, Searle has set up a straw man argument.  He does not address the query as to what might be required for a computer to  pass the Turing test.  He side-steps the issue, using sleight of hand.  He cleverly conceals with tricks of language the fact that the intelligence in the room resides in his own self.  I do not speak ISBN or Dewey, but I have no trouble looking up where to place a book on a shelf.  If I mis-file a book, I don't blame the 'book-room', I blame myself.

    The ability to recognise a Chinese character as such, and to match it against another similar pattern, requires some minimal amount of intelligence.  A simple pattern-matcher isn't enough.  To match a variety of sizes, styles, etc., and to allow for printing errors, requires intelligence.  That intelligence, an intelligence which is entirely indifferent to marginal changes in the test data, resides in Searle.

    False assumptions are made about conversation.  If you observe any human conversation, you will observe that people can respond to an utterance with a virtual infinity of choices.  Searle moves from the particular, a trivial chat program with highly restricted choices of topic, to the general, a human-controlled model of a chatbot.  He does not describe an algorithm or heuristic by which his model can pass the Turing test.  He does not need to describe the heuristic because it is Searle himself, present in the room, who physically embodies the heuristic.

    Searle entirely fails to show that a Chinese room could operate at all in the entire absence of intelligence.  All that his argument boils down to is that he imagines intelligence as 'something in the way the brain works'.  (Brains cause minds.)   Such an argument contributes nothing to the advancement of science.
    Larry Arnold
    I think complexity is at the root of the problem. Translating Chinese is one thing, but the analogy is just an analogy, a useful aid to discussing a particular problem and nothing else. The room is an insufficiently complex system to have or to need a consciousness of its own, and the notion of an operator inside is a bad analogy for the simulation of human intelligence (presuming that to be the goal of AI).

    In order to make sense of it I shall, for the sake of rhetoric, deny the existence of myself; that is to say, abandon the idea of a 'homunculus' or watcher somewhere inside my head (the equivalent of Searle sitting in the Chinese room) that makes sense of and understands all the sensory inputs. The I who arrogantly assumes it is writing this is the emergent property of a vast number of processes which it does not fully comprehend and of which it is not even fully aware. For example, there are those fingers typing away on the keyboard without looking, conditioned to knowing where they should be, whilst the conscious bit concentrates on what they should communicate rather than directing them to the task of consciously picking out every letter. At any given point the focus of that I could shift, to a knocking at the door perhaps, or to the smell of burnt toast.

    Perhaps a sufficiently complex machine could become conscious in its own way if and when that necessity arose somewhere as part of its heuristic, but I don't see it coming all that soon.

    We may have reached the point where we can build and programme a machine to beat the world's best chess player, but that is no great feat, because the machine does it by brute force: a larger repository of possible moves and a more reliable heuristic, with no need for conscious thought. When it can get up, make a cup of coffee, read a magazine and still get back to the task in hand, playing chess, then we will have to think twice about it.
    Laurence: you can type without looking at the keyboard?  I like your style; I have to type without looking at the monitor.  :(

    Someone once said that when he can ask a robot "Where did I leave the, er...?" and it answers "The right-hand bottom drawer of your desk", then he would believe it was intelligent.
    I honestly apologize for reviving the thread... but I couldn't help it. [blush]
    I'm a bit mad at this "thought experiment"; it's quite prejudiced and altogether wrong. Who says that learning to speak sentiently in a certain language amounts to having an AI? My view lies somewhere close to what Nuno Cravino and Patrick Lockerby wrote, and I'd like to say there's a reason the Turing test is a classic.

    I'm talking about the original Turing test (which does NOT state that if a person can't tell the difference between a human's and a computer's answers then you have an AI; I don't know why this version has clung to the public mind). In brief, it describes a set of quite complex tasks (more complex than speaking in Chinese, which, by the way, was another cheap trick by Searle; he could have selected a less catchy language, since Chinese has one of the simplest grammars around) which could be performed "mechanically" if the machine employs huge databases and lists of rules.
    Most people reach a point of transition when they read the test: one moment they are looking at complex mechanical tasks, and the next they complain, "but if the computer does *this*, then it would be intelligent". And suddenly they remember that this is what it was supposed to be about all along.

    The Turing test suggests that there is a point at which quantitative change brings a qualitative one. As far as I know, no one can disprove that human intelligence is exactly such an extremely complex set of functions.