    Three And A Half Thought Experiments In Philosophy Of Mind
    By Massimo Pigliucci | October 16th 2013
    You can tell I've had philosophy of mind on my mind lately. I've written about the Computational Theory of Mind (albeit within the broadest context of a post on the difference between scientific theories and philosophical accounts), about computation and the Church-Turing thesis, and of course about why David Chalmers is wrong about the Singularity and mind uploading (in press in a new volume edited by Russell Blackford and Damien Broderick).

    Moreover, and without my prompting, my friend Steve Neumann has just written an essay for RS about what it is like to be a Nagel. Oh, and I recently reviewed John Searle's Mind: A Brief Introduction for Amazon.

    But what prompted this new post is a conversation Julia Galef and I recently had for a forthcoming episode of the Rationally Speaking podcast, a chat with guest Gerard O'Brien, a philosopher (of mind) at the University of Adelaide in Australia. It turns out that Gerard and I agree on more than I thought (he is sympathetic to the Computational Theory of Mind, but in such a partial and specific way that I can live with it; moreover, he really disappointed Julia when he said that mind uploading ain't gonna happen, at least not in the way crazy genius Ray Kurzweil and co. think it will).

    During our exchange, I was able to crystallize in my mind something that had bothered me for a while: why is it, exactly, that so many people just don't seem to get the point of John Searle's famous Chinese Room thought experiment? Gerard agreed both with my observation (i.e., a lot of the criticism seems to be directed at something else, rather than at what Searle is actually saying), and with my diagnosis (more on this in a moment). That in turn made me think about several other famous thought experiments in philosophy of mind, and what exactly they do or don't tell us - sometimes even regardless of what the authors of those experiments actually meant!

    So, below is a brief treatment of Searle's Chinese Room, Thomas Nagel's what is it like to be a bat?, David Chalmers' philosophical zombies, and Frank Jackson's Mary's Room. I realize these are likely all well known to my readers, but bear with me, I may have a thing or two of interest to say about the whole ensemble. (The reason I refer to 3.5, rather than 4, thought experiments in the title of the post is that I think Nagel's and Jackson's make precisely the same point, and are thus a bit redundant.) In each case I'll provide a brief summary of the argument, what the experiment shows, and what it doesn't show (often, contra popular opinion), with a brief comment about the difference between the latter two.

    1. The Chinese Room

    Synopsis: Imagine a room with you in the middle of it, and two slots on opposite sides. Through one slot someone from the outside slips in a piece of paper with a phrase in Chinese. You have no understanding of Chinese, but - helpfully - you do have a rule book at your disposal, which you can use to look up the symbols you have just received, and which tells you which symbols to write out in response. You dutifully oblige, sending the output slip through the second slot in the room.

    What it does mean: Searle's point is that all that is going on in the room is (syntactic) symbol manipulation, but no understanding (semantics). From the outside it looks like the room (or something inside it) actually understands Chinese (i.e., the Room would pass Turing's test), but the correct correspondence between inputs and outputs has been imported by way of the rule book, which was clearly written by someone who does understand Chinese. The idea, of course, is that the room works analogously to a digital computer, whose behavior appears to be intelligent (when seen from the outside), with that intelligence not being the result of the computer understanding anything, but rather of its ability to speedily execute a number of operations that have been programmed by someone else. Even if the computer, say, passes Turing's test, we still need to thank the programmer, not the computer itself.
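
    To make the purely syntactic character of the room concrete, here is a minimal sketch of my own (not Searle's formulation), with a made-up two-entry lookup table standing in for his rule book. From outside the room the replies might look competent, yet nothing in the code understands a word of Chinese:

```python
# Hypothetical "rule book": a lookup table pairing incoming Chinese phrases
# with canned Chinese replies. Pure symbol manipulation, no semantics.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(slip_of_paper: str) -> str:
    """Return whatever the rule book dictates; the 'room' understands nothing."""
    return RULE_BOOK.get(slip_of_paper, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # looks like understanding; it is only lookup
```

    Whatever semantic competence shows up in the output was put there by whoever wrote the table, which is exactly the point about thanking the programmer rather than the computer.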

    What it does not mean: The Chinese Room is not meant as a demonstration that thinking has nothing to do with computing, as Searle himself has clearly explained several times. It is, rather, meant to suggest that something is missing in the raw analogy between human minds and computers. It also doesn't mean that computers cannot behave intelligently. They clearly can and do (think of IBM's Deep Blue and Watson). Searle was concerned with consciousness, not intelligence, and the two are not at all the same thing: one can display intelligent behavior (as, say, plants do when they keep track of the sun's position with their leaves) and yet have no understanding of what's going on. However - obviously, I hope - understanding is not possible without intelligence.

    Further comments: I think the confusion here concerns the use of a number of terms which are not at all interchangeable. In particular, people shift among computing speed, intelligence, understanding, and consciousness while discussing the Chinese Room. Intelligence very likely does have to do (in part) with computing speed, which is why animals' behavior is so much more sophisticated than most plants', and why predators are usually in turn more sophisticated than herbivores (it takes more cunning to catch a moving prey than to chew on stationary plants). But consciousness, in the sense used here, is an awareness of what is going on, and not just a phenomenological awareness (as, say, in the case of an animal feeling pain), but an awareness based on understanding.

    The difference is perhaps more obvious when we think of the difference between, say, calculating the square root of a number (which any pocket calculator can do) and understanding what a square root is and how it functions in mathematical theory (which no computer existing today, regardless of how sophisticated it is, actually possesses).
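
    A toy illustration of the calculator side of that contrast (again a sketch of my own, using nothing beyond standard Python): Newton's method grinds out the square root of 2 by blindly repeating one update rule, which is all a pocket calculator does; the "understanding what a square root is" side of the contrast appears nowhere in the code.

```python
def sqrt_newton(x: float, tolerance: float = 1e-12) -> float:
    """Approximate the square root of x by mechanically repeating one update rule."""
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0  # Newton's update, applied with zero comprehension
    return guess

print(sqrt_newton(2.0))  # ~1.4142135623730951
```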

    2. Mary's Room

    Synopsis: Consider a very intelligent scientist - Mary - who has been held (somehow...) in an environment without color since her birth (forget the ethics, it's a thought experiment!). That is, throughout her existence, Mary has experienced the world in black and white. Now Mary is told, and completely understands, everything there is to know about the physical basis of color perception. One bright day, she is allowed to leave her room, thus seeing color for the first time. Nothing, argues Frank Jackson, can possibly prepare Mary for the actual phenomenological experience of color, regardless of how much scientific knowledge she had of it beforehand.

    What it does mean: According to Jackson, this is a so-called "knowledge argument" against physicalism. It seems that the scientific (i.e., physicalist) understanding of color perception is simply insufficient for Mary to really understand what experiencing color is like, until she steps outside of her room, thus augmenting her theoretical (third person) knowledge with (first person) experience of color. Hence, physicalism is false (or at the least incomplete).

    What it does not mean: Contra Jackson, the experiment does not show that physicalism is wrong or incomplete. It simply shows that third person (scientific) descriptions and first person experiences are orthogonal to each other. To confuse them is to commit a category mistake, like asking for the color of triangles and feeling smug for having said something really deep.

    Further comments: I have always felt uncomfortable about this sort of thought experiment because, quite frankly, I misunderstood it entirely the first few times I encountered it. It seemed to me that what the authors meant to show (the orthogonality of third and first person "knowledge") was obviously true, so I didn't see what all the fuss was about. Turns out, instead, that the authors themselves are confused about what their own thought experiments show.

    3. What is it like to be a bat?

    Synopsis: Thomas Nagel invited us to imagine what it is like (in the sense of having the first person experience) to be a bat. His point was that - again - we cannot answer this question simply on the basis of a scientific (third person) description of how bats' brains work, regardless of how sophisticated and complete this description may be. The only way to know what it is like to be a bat is to actually be a bat. Therefore, physicalism is false, yadda yadda.

    What it does mean: Precisely the same thing that Jackson's Mary's Room does.

    What it does not mean: Precisely the same thing that Jackson's Mary's Room doesn't.

    Further comments: It's another category mistake. Actually, it's exactly the same category mistake.

    4. Philosophical zombies

    Synopsis: David Chalmers has asked us to consider the possibility of creatures ("zombies," known as p-zombies, or philosophical zombies, to distinguish them from the regular horror movie variety) that from the outside behave exactly like us (including talking, reacting, etc.) and yet have no consciousness at all, i.e., they don't have phenomenal experience of what they are doing. You poke a zombie and it responds as if it were in pain, but there ain't no actual experience of pain "inside" its mind, since there is, in fact, no mind. Chalmers argues that this sort of creature is at least conceivable, i.e., it is logically possible, if perhaps not physically so. Hence..., yeah, you got it, physicalism is false or incomplete.

    What it does mean: Nothing. There is no positive point, in my opinion, that can be established by this thought experiment. Besides the fact that it is disputable whether p-zombies are indeed logically coherent (Dennett and others have argued in the negative), I maintain that it doesn't matter. Physicalism (the target of Chalmers' "attack") is not logically necessary, it is simply the best framework we have to explain the empirical evidence. And the empirical evidence (from neurobiology and developmental biology) tells us that p-zombies are physically impossible.

    What it does not mean: It doesn't mean what Chalmers and others think it does, i.e. a refutation of physicalism, for the reason just explained above. It continues to astonish me how many people take this thing seriously. This attitude is based on the same misguided idea that underlies Chalmers' experiment of course: that we can advance the study of consciousness by looking at logically coherent scenarios. We can't, because logic is far too loose a constraint on the world as it really is (and concerns us).

    If it weren't, the classic rationalistic program in philosophy - deriving knowledge of how things are by thinking really hard about them - would have succeeded. Instead, it went the way of the Dodo at least since Kant (with good help from Hume).

    Further comments: Consciousness is a bio-physical phenomenon, and as Searle has repeatedly pointed out, the answer to the mystery will come (if it will come) from empirical science, not from thought experiments. (At the moment, however, even neuroscientists have close to no idea of how consciousness is made possible by the systemic activity of the brain. They only know that that's what's going on.)

    So, what are we to make of all of the above? Well, what we don't want to make of it is either that thought experiments are useless or, more broadly, that philosophical analysis is useless. After all, what you just read is a philosophical analysis (did you notice? I didn't use any empirical data whatsoever!), and if it was helpful in clarifying your ideas, or even simply in providing you with further intellectual ammunition for continued debate, then it was useful. And thought experiments are, of course, not just the province of philosophy.

    They have a long and illustrious history in science (from Galileo to Newton) as well as in other branches of philosophy (e.g., in ethics, to challenge people's intuitions about runaway trolleys and the like), so we don't want to throw them out as a group too hastily.

    What we are left with are three types of thought experiments in philosophy of mind: (i) Those that do establish what their authors think (Chinese Room), even though this is a more limited conclusion than what its detractors think (the room doesn't understand Chinese, in the sense of being conscious of what it is doing; but it does behave intelligently, in proportion to its computational speed and the programmer's ability). (ii) Those that do not establish what their authors think (Mary and the bats), but nonetheless are useful (they make clear that third person description and first person experience are different kinds of "knowledge," and that it makes no sense to somehow subsume one into the other). (iii) Those that are, in fact, useless, or worse, pernicious (p-zombies) because they distract us from the real problem (what are the physical bases of consciousness?) by moving the discussion into a realm that simply doesn't add anything to it (what is or is not logically conceivable about consciousness?).

    That's it, folks! I hope it was worth your (conscious) attention, and that the above will lead to some better (third person) understanding of the issues at hand.

    Originally appeared on Rationally Speaking, Sept. 6th, 2013

    Comments

    "Contra Jackson, the experiment does not show that physicalism is wrong or incomplete. It simply shows that third person (scientific) descriptions and first person experiences are orthogonal to each other. To confuse them is to commit a category mistake, like asking the color of triangles"

    The two categories in your example are colour and shape. I understand that colour is to do with the energetic properties of a wave and its particles, and shape is to do with the relative spatiotemporal properties of a wave's particles. Is this what justifies your calling them separate categories, or something else?

    MikeCrow
    My opinion is that the problem isn't hardware per se, but how it's used.
    Humans start off with some hard wiring, but everything else is adaptation of connections from external stimuli. I think programming intelligence will not be a simple task, but employing a more human type of learning process might.

    I have an example of this difference: I could roll a ball back and forth with all of my children before they were a year old; how many lines of code would it take to program a robot arm (or arms) to play catch?
    Never is a long time.
    The human brain is a human-experience recorder/simulator. To create a synthetic human would require capturing this 'human experience', with the ability to convey it sympathetically. A fair chunk of the human race is not so good at it. We communicate by creating a simulation of a desired experience. That's how a 'gut response' happens. We re-enact what it would take for us to 'say that', 'look like that', 'be motivated to do that', etc. What is the size of a human experience vector? If the experience is 'important', a huge swath of our personal history is part of the vector.

    Well, the problem with Searle is that he grants that a computer could be programmed to behave as a human. That is, he believes in p-zombies. The problem then is there is no way to tell the difference between a p-zombie and a person. Even if there were some mysterious Searlian substance or process responsible for consciousness, there would be no empirical method of recognizing it.

    OK, what happens if we decide that p-zombies are impossible? Well, that violates Church-Turing. OK, fine, then show me this mysterious Searlian substance or process that violates C-T. People have been looking for a long time.

    Massimo's position that p-zombies don't matter is really just a refusal to acknowledge the above dilemma.

    Now Penrose at least understands this problem. He offers some possible places to look but it is all pretty unconvincing.

    On the other side of the argument is Descartes' "I think therefore I am". It is a powerful argument but it is the only argument. After all if we did not experience our thoughts we wouldn't be arguing about consciousness.

    Acknowledging third person and first person descriptions as orthogonal already points to a gap, whereby science can only answer for third person descriptions. Although this is not a widely acknowledged opinion (as our empiricist friends ultimately aim to reduce the status of the first person to a mere epiphenomenon of the third person state), all "third person" descriptions are just a particular case of first person descriptions (those that "commute" between different instances of persons, and as such are shared). I perceive the result of the experiment as 6. I can say he would perceive the result of the experiment as 6 (for all "he"). Therefore we can drop the quantifier and say the result of the experiment is 6. As for the computational theory of mind, a Gödelian argument shows that even if a mind A were entirely the result of an algorithm B, then said mind would be incapable of comprehending algorithm B. Algorithms can have arbitrarily large complexity; who is to decide along the scale exactly where (if anywhere) mind starts? Algorithms are unable to account for mathematical intuition, our ability to comprehend math beyond a mere formal sense (i.e., memorize and apply step 1, step 2, ...). Even the "physical basis" of the project is shaky. Algorithms and classical bits can be copied or deleted. Quantum bits, however, cannot.
    I do not understand what neuroscientists are trying to find. Even in principle, what would be the answer, as they peel away more and more physical layers? Would they find better algorithms? Perhaps. http://www.wired.com/wiredscience/2009/11/fly-eyes/ But not the mind. I'd like to end with a passage from Leibniz (who grasped the importance of algorithmic/symbolic systems centuries before the computer):
    "It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist. "

    1. Chinese room:

    Surely the algorithm itself is actually conscious? To say that the paper-pusher and the paper themselves don't understand Chinese is of course true, but they are merely analogues of the sodium ions and electrons in our brain. Those particles don't understand language either. It's the processing itself that does. Any argument to the contrary must accept some kind of non-physical soul, with the brain acting as a highly specific antenna or something.

    4. P-Zombies:

    Very similar to the chinese room. If the 'algorithm' running the p-zombie has all the properties of consciousness, then it is conscious. There can be no other test for consciousness. So P-Zombies can't exist.

    Thor Russell
    Dead right about the Chinese Room: you can apply exactly the same argument to neurons and then conclude they don't have some magic something, whatever it is. But you wouldn't get a job as a philosopher for pointing out that such thought experiments have outlived their usefulness; it's much better to bring them up endlessly so you can write papers and blog posts.
    Thor Russell
    Well, yes, as far as we know there is nothing special about neurons, so we could make the Chinese Room argument with neurons. A note of caution here, however: discoveries like quantum mechanics and relativity have fundamentally changed our understanding of the universe in ways that would have seemed inconceivable before.

    But having said that, it should be noted that if your theory of mind needs a relativity-level change in our understanding of physics, then at the very least you are making an extraordinary claim.

    There are two ways for p-zombies not to exist. The first is that any zombie that appears conscious will be conscious and so is not a zombie. The other is that fully human appearing zombies cannot be constructed without some special Searlian substance or process in which case again they are not zombies.

    The first violates our Cartesian intuition. The second makes an extraordinary claim.

    If it's the algorithm that's conscious, why are no man-made algorithms conscious? Or are they? Does Deep Blue have subjective experiential consciousness? If not, why not?