    David Chalmers And The Singularity That Will Probably Not Come
    By Massimo Pigliucci | November 6th 2009 08:46 AM | 27 comments

    David Chalmers is a philosopher of mind, best known for his argument about the difficulty of what he termed the “hard problem” of consciousness, which he typically discusses by way of a thought experiment featuring zombies who act and talk exactly like humans, and yet have no conscious thought (I explained clearly what I think of that sort of thing in my essay on “The Zombification of Philosophy”).

    Yesterday I had the pleasure of seeing Chalmers in action live at the Graduate Center of the City University of New York. He didn’t talk about zombies, telling us instead his thoughts about the so-called Singularity, the alleged moment when artificial intelligence will surpass human intelligence, resulting in either all hell breaking loose or the next glorious stage in human evolution — depending on whether you typically see the glass as half empty or half full. The talk made clear to me what Chalmers’ problem is (other than his really bad haircut): he reads too much science fiction, and is apparently unable to snap out of the necessary suspension of disbelief when he comes back to the real world. Let me explain.

    The argument made by Chalmers (and by other advocates of the possibility of a Singularity) starts off with the simple observation that machines have gained computing power at an extraordinary rate over the past several years, a trend that one can extrapolate to a near-future explosion of intelligence. Too bad that, as any student of Statistics 101 ought to know, extrapolation is a really bad way of making predictions, unless one can be reasonably assured of understanding the underlying causal phenomena (which we don’t, in the case of intelligence). (I asked Chalmers a question along these lines in the Q&A and he denied having used the word extrapolation at all; I checked with several colleagues over wine and cheese, and they all confirmed that he did — several times.)
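    To make the statistical point concrete, here is a toy sketch of my own (nothing Chalmers presented, and every number in it is invented for illustration): fit an exponential to the early, seemingly explosive phase of a process that in fact saturates (a logistic curve), then extrapolate, and the projection overshoots reality by orders of magnitude, precisely because the fit knows nothing about the underlying causal constraints.

    import math

    def logistic(t, ceiling=100.0, rate=0.5, midpoint=20.0):
        # The "true" process: growth that looks exponential early on but saturates.
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    # Observe only the early, seemingly exponential phase (t = 0..10).
    observed = [(t, logistic(t)) for t in range(0, 11)]

    # Naive exponential fit: ordinary least squares on the logged values.
    n = len(observed)
    xs = [t for t, _ in observed]
    ys = [math.log(v) for _, v in observed]
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean

    def extrapolate(t):
        return math.exp(intercept + slope * t)

    for t in (10, 20, 30, 40):
        print(f"t={t:2d}  extrapolated={extrapolate(t):14.1f}  actual={logistic(t):6.1f}")

    The fit is excellent on the data it has seen and spectacularly wrong outside of it, which is the whole statistics lesson in a nutshell.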

    Be that as it may, Chalmers went on to present his main argument for the Singularity, which goes something like this:

    1. There will soon be AI (i.e., Artificial Intelligence)
    2. There will then soon be a transition from AI to AI+
    3. There will then soon be a transition from AI+ to AI++

    Therefore, there will be AI++

    All three premises and the conclusion were followed by a parenthetical statement to the effect that each holds only “absent defeaters,” i.e., absent anything that may get in the way of any of the above.

    Chalmers was obviously very proud of his argument, but I got the sense that few people were impressed, and I certainly wasn’t. First off, he consistently refused to define what AI++, AI+, or even, for that matter, AI, actually mean. This, in a philosophy talk, is a pretty grave sin, because philosophical analysis doesn’t get off the ground unless we are reasonably clear on what it is that we are talking about. Indeed, much of philosophical analysis aims at clarifying concepts and their relations. You would have been hard pressed (and increasingly frustrated) to find any philosophical analysis whatsoever in Chalmers’ talk.

    Second, Chalmers did not provide a single reason for any of his moves, simply stating each premise and adding that if AI is possible, then there is no reason to believe that AI+ (whatever that is) is not also possible, indeed likely, and so on. But, my friend, if you are making a novel claim, the burden of proof is on you to argue that there are positive reasons to think that what you are suggesting may be true, not on the rest of us to prove that it is not. Shifting the burden of proof is the oldest trick in the rhetorical toolbox, and not one that a self-respecting philosopher should deploy in front of his peers (or anywhere else, for that matter).

    Third, note the parenthetical disclaimer that any of the premises, as well as the conclusion, will not actually hold if a “defeater” gets in the way. When asked during the Q&A what he meant by defeaters, Chalmers pretty much said anything that humans or nature could throw at the development of artificial intelligence. But if that is the case, and if we are not provided with a classification and analysis of such defeaters, then the entire argument amounts to “X is true (unless something proves X not to be true).” Not that impressive.

    The other elephant in the room, of course, is the very concept of “intelligence,” artificial or human. This is a notoriously difficult concept to unpack, and even more so to measure quantitatively (which would be necessary to tell the difference between AI and AI+ or AI++). Several people noted this problem, including myself in the Q&A, but Chalmers cavalierly brushed it aside, saying that his argument does not hinge on human intelligence, or computational power, or intelligence in a broader sense, but only on an unspecified quantity “G,” which he quickly associated with an unspecified set of cognitive capacities through an equally unspecified mathematical mapping function (adding that “more work would have to be done” to flesh out such a notion — no kidding). Really? But wait a minute, if we started this whole discussion about the Singularity using an argument based on extrapolation of computational power, shouldn’t our discussion be limited to computational power? (Which, needless to say, is not at all the same as intelligence.) And if we are talking about AI, what on earth does the “I” stand for in there, if not intelligence — presumably of a human-like kind?

    In fact, the problem with the AI effort in general is that we have little progress to show after decades of attempts, likely for the very good reason that human intelligence is not algorithmic, at least not in the same sense in which computer programs are. I am most certainly not invoking mysticism or dualism here; I think that intelligence (and consciousness) are the result of the activity of a physical brain substrate, but the very fact that we can build machines with a degree of computing power and speed that greatly exceeds those of the human mind, and yet are nowhere near being “intelligent,” should make it pretty clear that the problem is not computing power or speed.

    After the deployment of the above-mentioned highly questionable “argument,” things just got bizarre in Chalmers’ talk. He rapidly proceeded to tell us that AI++ will happen by simulated evolution in a virtual environment — thereby making a blurred and confused mix out of different notions such as natural selection, artificial selection, physical evolution and virtual evolution.

    Which naturally raised the question of how we control the Singularity and stop “them” from pushing us into extinction. Chalmers’ preferred solution is either to prevent the “leaking” of AI++ into our world, or to select for moral values during the (virtual) evolutionary process. Silly me, I thought that the easiest way to stop the threat of AI++ would be to simply unplug the machines running the alleged virtual world and be done with them. (Incidentally, what does it mean for a virtual intelligence to exist? How does it “leak” into our world? Like a Star Trek hologram gone nuts?)

    Then the level of unsubstantiated absurdity escalated even faster: perhaps we are in fact one example of virtual intelligence, said Chalmers, and our Creator may be getting ready to turn us off because we may be about to leak out into his/her/its world. But if not, then we might want to think about how to integrate ourselves into AI++, which naturally could be done by “uploading” our neural structure (Chalmers’ recommendation is one neuron at a time) into the virtual intelligence — again, whatever that might mean.

    Finally, Chalmers — evidently troubled by his own mortality (well, who isn’t?) — expressed the hope that AI++ will have the technology (and interest, I assume) to reverse-engineer his brain, perhaps out of a collection of scans, books, and videos of him, and bring him back to life. You see, he doesn’t think he will live long enough to actually see the Singularity happen. And that’s the only part of the talk on which we actually agreed.

    The reason I went on for so long about Chalmers’ abysmal performance is that this is precisely the sort of thing that gives philosophy a bad name. It is nice to see philosophers taking a serious interest in science and bringing their discipline’s tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well-known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than Trekkie fans at their annual convention. Now, if you will excuse me, I’ll go back to the next episode of Battlestar Galactica, where you can find all the basic ideas discussed by Chalmers presented in an immensely more entertaining manner than in his talk.

    Comments


    Your most interesting paragraph, here, is #12 ("In fact, the problem with the AI effort in general is …"). I wish you'd speculate on what quality of intelligence we might exploit, in engineering AI of the order generally imagined, if it's "not computing power or speed."

    Can't see AI happening soon at all; we haven't been very near to programming anything with much intelligence and, as you've said, we haven't improved much recently. We also haven't yet got the sort of silicon that could run human-level intelligence in real time. What's more, the silicon we'll have when we can't shrink our chips any more is still about a thousand times short of our lowest guess of what it would take to run a human-equivalent brain (as I discussed in my blog above).

    Well I had a well-thought reply, but then Chrome crashed, so....

    The data doesn't lie: http://en.wikipedia.org/wiki/Technological_singularity

    Surely you know the basic graphs are based on aspects like Moore's Law, which has held true for over 40 years now. If we assume that the exponential growth of technology will continue (and outside cataclysmic destruction, I don't see why not)... then it is clear we will have computing power and speeds far exceeding that of the human brain, and of the entire human species, within a few decades.
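    Spelled out as a rough back-of-envelope sketch (the doubling time and, especially, the brain-equivalent figure below are assumed, contested numbers, not established facts):

    import math

    current_ops_per_sec = 1e13     # assumed figure for a present-day high-end machine
    brain_ops_per_sec = 1e16       # one commonly cited, and much disputed, guess for the brain
    doubling_time_years = 2.0      # Moore's-law-style assumption

    doublings = math.log2(brain_ops_per_sec / current_ops_per_sec)
    print(f"{doublings:.1f} doublings, i.e. roughly {doublings * doubling_time_years:.0f} years")
    # roughly 10 doublings, i.e. about 20 years, assuming the doubling trend keeps holding.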

    Whether true AI is achieved or not, we will embed ourselves with this increased computing power (as we are already doing in basic forms). There is no "science fiction" about this.

    It really doesn't take too much lateral thinking to imagine the possibilities, and to then act on them to turn them into realities.

    kerrjac
    Nice article. These sorts of critiques are valuable - not b/c they rag on people - but b/c they expose common fallacies of thought, which are often present in many people's minds at a less extreme level.

    Good point as well on Chalmers' inability to define intelligence; it makes it possible - if not fun - to run back through his argument and replace "AI" with any term you please.
    Andrea Kuszewski
    Your best, most profound, and true statement in this (terrific) article:

    In fact, the problem with the AI effort in general is that we have little progress to show after decades of attempts, likely for the very good reason that human intelligence is not algorithmic, at least not in the same sense in which computer programs are...

    Nice analysis. I agree with you, and debate AI researchers and computational neuroscientists on this issue often.
    Interesting article. Obviously the interesting part is not how bad Chalmers' speech was, but what your legitimate rebuttal of his arguments implies about his thesis.
    I fully agree that the burden of proof lies on the proposer of a thesis, and the way you report the talk suggests that Chalmers did a very poor job in that respect. Still, I think that the burden of proof would be on you when stating that "intelligence is not algorithmic", and implying that therefore no algorithm can generate intelligence. It sounds to me like, for instance, Penrose's argument in "Shadows of the Mind", which I believe to be fallacious, for reasons already pointed out by better thinkers than me.
    I would agree that mere computational brute force is not enough to yield intelligence, and that organization of the "brain" is key, but I think it is unfair to AI proponents to picture them as relying on Moore's law alone to get the job done. So, being skeptical is fine, yet being dismissive is perhaps unjustified.

    Roger Penrose has understood the issue and given his view. If you disagree, you must do better than simply call his arguments 'fallacious'. Why are they fallacious? A string of noughts and ones will never be conscious - is that dismissive of AI? Or just a true statement?

    Gerhard Adam
    I would agree that mere computational brute force is not enough to yield intelligence, and that organization of the "brain" is key, but I think it is unfair to AI proponents to picture them as relying on Moore's law alone to get the job done.
    Without a working definition of what intelligence actually is and how one determines that it has been achieved, what else is there to do but be dismissive?
    Mundus vult decipi
    vonankh
    Nice article, although I think it shows a lack of vision. I say this without ever having heard any of Chalmers' presentation(s), but from what you are describing, it is clear that he's a visionary. Great! Interesting!

    However, you are giving an awful lot of critique to someone who obviously has been thinking about this topic for a long time, while at the same time wanting to share his/their ideas. I don't know what kind of lectures are given at the Graduate Center of CUNY, but I could imagine those lectures are popular enough to be considered public. You surely know very well, yourself, how difficult it can be to present ideas that are to remain logically consistent in a fun and interesting way for the public. But then perhaps he's not experienced enough to leave out unclear logical steps and to properly prepare for higher-level questions.

    But, if there is something that surprises me, it must be your lack of vision. Some of the arguments you use to dismiss Chalmers' ideas are nearly as vague as his. (How can you disprove something when you don't know what it is?) But that is already too personal. Let's just pretend we understand what he's saying. That:

    a) Would it be unreasonable to claim that there can be a future organism of construction that can be more intellectually developed than a common human of today?

    b) That if (a) happens, then unless humans also increase their intellectual abilities, they may have to compete with this organism?

    Thus, as I see it, this is not a discussion about what "intelligence" is, nor about what can (currently) be made from silicon. Both are completely irrelevant for the discussion above. Proof:

    1) Because no matter how good and how intelligent the methods we use to define and measure "intelligence" are, there can always be someone more intelligent.

    2) The ridiculous discussion of "algorithmic intelligence" is so old school. It is based on a concept of computational devices from the childhood of electronics. Any serious computer scientist knows that this is ancient technology. Tomorrow's devices (already today we see some) will tear down the distinction between electronics and biology, like the Berlin wall! It will all become one in a certain sense, in that electronics (i.e. semiconductors) will be mixed in with biological material. Therefore there will no longer be anything to "program" in the classical (computer science) sense. We will be able to construct organisms that behave in ways we would like them to behave, and hopefully not in ways that we cannot control. (Humanity is being a control freak.)

    3) Given that we remain within what we know today about the physical laws of the universe, we should be able to model human brain functionality at the fine-grained biochemical scale, given that we have enough information on how the signal substances work. Would it then be too much to ask to imagine that a larger model could behave in the same way as a complete human brain? It may not be immediately very exciting, but it could grow up, and very quickly at that!

    Now, let's go back to Chalmers again.

    So what is this "leaking"? It simply means that we (as humans) are getting so dependent on our technological environment that we can no longer live without it. Only your imagination can put limits on what this would mean. Perhaps we will in the future implant some kind of bio-electric devices in our heads to give us more of something. Perhaps memory, speed, logic or creativity. Or the opposite, if we connect our brains to a larger and more abstract cloud computing device. In either case, the existence would need to be in some kind of symbiosis. Which we should avoid. [See 2001 or The Matrix for good examples.]

    "Unsubstantiated absurdity" is imagination. There are no definite answers as to what conditions created the primordial soup we believe we came from. Although it makes most scientific sense that it appeared spontaneously (which it also does), there's is nothing that says we are not the result of a grad-school student experiment in some ancient higher civilization. But that's religion at its worst and science fiction at its best.
    Gerhard Adam
    a) Would it be unreasonable to claim that there can be a future organism of construction that can be more intellectually developed than a common human of today?
    Yes, quite unreasonable without a working definition of what that means.  Anything else is in the realm of fantasy and science fiction.

    In fact I would argue that it is axiomatic that you cannot produce something more intelligent than yourself since, by definition, you would lack the intellect to make that determination (note I'm not talking about the equivalent of a few I.Q. points). 

    Since any true intelligence would require total freedom of thought, this would include the ability to lie and deceive.  Therefore, it would be impossible to know if something of higher intelligence was ever produced.
    Tomorrow's devices (already today we see some) will tear down the distinction between electronics and biology, like the Berlin wall! It will all become one in a certain sense, in that electronics (i.e. semiconductors) will be mixed in with biological material. Therefore there will no longer be anything to "program" in the classical (computer science) sense. We will be able to construct organisms that behave in ways we would like them to behave, and hopefully not in ways that we cannot control.
    That's the fallacy: since you can never anticipate all the possible outcomes or unforeseen consequences, the mere fact that you made this statement is about control and not about what you can construct.

    Basically I think this will be possible the day we return from another galaxy at warp factor 9.
    Mundus vult decipi
    Yes, quite unreasonable without a working definition of what that means. Anything else is in the realm of fantasy and science fiction.

    I don't think this really hits the point. A comprehensive, bulletproof definition of intelligence is perhaps not easy to work out, but this is not necessarily a prerequisite to producing artificial intelligence. It would be necessary if we wanted to quantitatively measure intelligence in a fully objective way. However, the program of AI is about creating intelligence, not about making it a fully understood capability and one that can be measured through a standard, unquestionable procedure. This is a psychology task.

    The current state of the art is that intelligence is still to some extent a "folk" psychological concept. Yes, we have IQ tests. They are widely debated, and many believe they are "unidimensional", failing to capture the complexity of intelligence (I don't agree, but this is irrelevant). But hardly anyone would argue that intelligence does not exist, that people have no capability to spot it in other beings, or even that what standard IQ tests measure has absolutely nothing to do with "real" intelligence.

    Therefore, if we were able to create "something" endowed with all the abilities (to learn, act, communicate) that we all "folks" associate with intelligence; and if this "something" were able to stand on an equal basis with us while discussing philosophy or gastronomy; and if this "something" scored 200 in Cattell IQ tests, well, I would say that the AI program has been successful, irrespective of the degree of maturity that the definition and measurement of intelligence may have reached by that time. Yes, it may be a "turingesque" opinion, but I see no scandal in it; whoever wants to dismiss AI because of the lack of a sound definition of intelligence should be prepared to maintain that the concept (and others similarly "flawed") must be dropped not only from philosophy (which AI is not), but also from all other disciplines, starting with psychology and sociology. Putting the burden on AI alone is not fair.

    The lack of a satisfactory definition of intelligence does not prevent us from acknowledging intelligence in other humans, and, to a smaller degree perhaps, in animals. I fail to see why it should prevent us from acknowledging it in a different sort of being.

    Gerhard Adam
    The lack of a satisfactory definition of intelligence does not prevent us from acknowledging intelligence in other humans, and, to a smaller degree perhaps, in animals. I fail to see why it should prevent us from acknowledging it in a different sort of being.
    Acknowledging it is one thing, but to attempt to create it is another.  Your entire description is based on emulating intelligence and not creating it.  In short, AI is doomed to fail because the only thing it can do is attempt to recreate human intelligence in a machine.  That makes as much sense as trying to teach a turtle how to fly.  Intelligence (such as it is) is an integral part of every organism's evolution and therefore it is explicitly linked with that organism's ability to survive as well as its motivations for the decisions/choices it makes.

    The point is that AI considers that having a computer play chess is a correct approach to the problem, and fails to see that until the computer "CARES" that it wins, it is nothing but a big calculator. 

    In the end, without a working definition, there is no way to distinguish between emulation of intelligence and real intelligence.  Since the Turing test can only determine success in the former, AI is pretty well set to chase its tail for decades to come.
    Mundus vult decipi
    Your entire description is based on emulating intelligence and not creating it. In short, AI is doomed to fail because the only thing it can do is attempt to recreate human intelligence in a machine.

    This reminds me of John Searle. What is "emulating" intelligence? Having intelligent behavior without "real" intelligence? This leads to the next point:

    Intelligence (such as it is) is an integral part of every organism's evolution and therefore it is explicitly linked with that organism's ability to survive as well as its motivations for the decisions/choices it makes

    This sounds like a petition of principle, if I'm correctly using English here: intelligence is something bound to organisms, therefore non-organisms cannot be intelligent.

    If, instead, this means that intelligent behavior is causally linked to the intelligent agent operating through a broad range of autonomous, non-obvious actions, and exchanging information in a complex environment, there is no reason why this should not be implementable in a machine.
    In other terms, intelligence can be:
    1- a metaphysical feature of living beings, which of course implies that AI is doomed to fail.
    2- a functional property of complex systems, connected to "sensorial" inputs, "proprioception", logical capabilities, memory, etc. In this case, AI is a reasonable idea.
    3- a non-replicable physical property of biologic neural systems. In this case, AI is a contradiction in terms.

    Possibility 1 is logically admissible, but would certainly carry with it a lot of "burden of the proof". Possibility 3, which is what I understand from what I've read of Searle, simply does not make any sense to me.
    Possibility 2 implies that there is no such thing as "emulating" intelligence, since a different implementation of intelligence would necessarily have a different material substrate, and on that substrate that would be intelligence.

    Is it meaningless to say that fish "breathe water"? Well, if one defines breathing as the process that takes place in lungs, of course fish don't breathe. If one defines breathing in a functional way, fish do breathe. And even if I were not aware of all the chemical reactions implied in breathing, I would still be able to parallel the function of water to a fish and the function of air to humans, which can be a useful way to look at things. In any case, if fish did not exist, and someone proposed a research program to create a being that "breathes water", there is only one meaning that I can attach to this purpose, and it does make sense to me.

    Gerhard Adam
    Your analogy to fish misses the point.  Just as flying in an airplane is flight, but it is not analogous to a bird's flight.

    Both depend on absolute laws of chemistry, physics, and their anatomy/physiology which can be understood and consequently comparable methods may be developed that allow adaptation to other forms.  SCUBA allows humans to breathe underwater, but it doesn't attempt to duplicate fish gills.

    Flying depends absolutely on understanding (without exception) the laws of aerodynamics.  SCUBA depends on the technology of the tank, as well as understanding the gas laws for humans to survive. 

    However when it comes to intelligence, the view seems to be that we can proceed without understanding what it is that is being attempted.  Intelligence isn't subject to a set of algorithms (although it may behave that way when necessary), just as it isn't about arbitrarily accumulating information or recalling vast amounts of data.  Intelligence exists for the sole purpose of ensuring that the animal possessing it has the tools available to make decisions that (hopefully) enhance its ability to survive.

    Human beings did not evolve to wrestle with quantum mechanics or mathematics.  While that is certainly a consequence of the brains they possess, it isn't the reason for their existence.  Therefore when AI attempts to build machines that perform these functions, they are simply emulating human desires in an electronic form.  The machine has no such motivations or needs.  Just as a calculator doesn't care what problem we put into it, neither does a machine care whether it wins or loses at chess.  Without that motivational or emotional link, you cannot claim intelligence, although you could argue that you've emulated human intelligence for a task-specific purpose.

    There is a vast difference between the two forms.  Biological organisms have to face the consequences of their decisions, sometimes with their lives.  The worst a machine will experience is an error code.  It's the fundamental difference between flying a real airplane and flying a simulator.  No matter how good you are in the simulator, there's a completely different world that exists when you're actually 30,000 feet in the air.
    Mundus vult decipi
    A simulator is not a plane, conceded. It does not fly (i.e. it does not handle "real world" issues).
    But an AI device could. An AI device could drive a real car, or plane, or speak to a real person, or post to a real (?) blog, so intelligent behavior would not be simulated, it would be real. But you maintain that even if the behavior is indistinguishable from the real thing, some "essence" is not.

    If you mean that intelligence cannot be fully developed without complex functional relations with other cognitive abilities, I tend to agree, at least as far as human-like intelligence is concerned. In this case AI would require a full set of cognitive abilities to be developed, not just "reasoning". If you mean that, I agree, but this does not necessarily make AI ridiculous.

    If you mean that biological intelligence "serves a special purpose" (i.e. survival), and therefore cannot exist in a non-living entity, this makes no sense. We could be inspired by a spider's web thread to make a very strong fiber, and of course the "purpose" of the fiber would have nothing to do with the purpose of the spider's web (which is again survival). The artificial fiber could have the same strength and flexibility as the spider's silk, i.e. the same external functional features, and we would be happy with that.
    If we were able to replicate the functional features of human intelligence, for instance by building "digital neural networks", the fact that the latter would serve our purposes instead of natural selection would be completely immaterial.

    Gerhard Adam
    the fact that the latter would serve our purposes instead of natural selection would be completely immaterial.
    But it wouldn't make it intelligent.  It would simply be an extension of our own desires and consequently an extension of our intelligence.  A calculator isn't smarter because it can add numbers faster, and neither would be a device that simply fulfills a human desire or intent.
    An AI device could drive a real car, or plane, or speak to a real person, or post to a real (?) blog, so intelligent behavior would not be simulated, it would be real.
    While it could do those things, it wouldn't be intelligent unless it was self-motivated to do so.  Anything else just makes it a sophisticated machine that is still doing our bidding.  In other words, why would an AI device want to drive a car or speak to a person?  Those are the important questions.  If it doesn't have an intent, then it is simply a sophisticated simulator.

    Just as it is with humans, we may look at a painting and consider it a work of genius, but we would never do so with a copy.  Regardless of the skill with which a forger applied himself, it could never be the same as the original act of "creation", because the forger is simulating someone else's idea.
    Mundus vult decipi
    Gerhard Adam
    As an additional consideration, imagine if you built a machine that was an exact replica of you in every way (other than that it wasn't biological; i.e. not a clone).  Further let's imagine that it was indistinguishable from you to an outside observer, so that it could hold conversations, reflect your behaviors, so that it would be impossible to tell that it wasn't actually you.

    So the question becomes;  is this intelligent behavior (i.e. is it you?) or is it emulated?

    The point is that there is never a circumstance where it would be impossible to tell the original from the copy upon close examination.  It might go out to dinner, but as a machine it wouldn't need to eat, so its behavior isn't authentic; it is simulated.

    This is what I mean by self-motivated intelligence.  A machine cannot be motivated to behave like a human, it must behave like a machine to have the independence necessary to develop true intelligence.  However, we might not consider such behavior intelligent, since it wouldn't relate to the framework that we value. 

    BTW, rather than hijack this discussion, I've posted an article on the problems of machine intelligence where we can continue this discussion if you like.
    Mundus vult decipi
    vonankh
    First I'd like to make one thing clear. You are not getting any of my cookies...even if you shoot me.

    Humans are already interfacing their brains with computers. A paraplegic fellow had one installed a short time ago and he can perform functions on a computer. So computers inside human skulls is very close to reality.

    However, since science has not the slightest clue what intelligence is, it seems really ignorant to say that within fifty years there will be artificial, or man-made, or inevitable machine intelligence.

    Computers are tools, very versatile tools. But that is all they are: autonomous devices that perform certain functions better than a man unaided. Why should these overblown calculators be intelligent? Because there have been minute increases in processing speed every few years? Because computers have more processing power than a human brain? Computers are tools no matter how we use them. If there is a singularity coming, chances are it is as mundane as the emergence of vehicles that don't sink in water (boats). AI is fake, the science is unsound, and the sooner those that work on AI come to this realization, the sooner they can find better outlets for their immense creativity.

    Gerhard Adam
    Do you agree or disagree that we will eventually be able to simulate every aspect of the neurons and their interconnections in a human brain?
    What does "eventually" mean in this question?  As a practical matter within the next 100 years I would say no.  We may come to a much greater understanding and even reach some degree of manipulative capability, but to simulate every aspect of their functioning .... no.
    Mundus vult decipi
    "I think that intelligence (and consciousness) are the result of the activity of a physical brain substrate, but the very fact that we can build machines with a degree of computing power and speed that greatly exceeds those of the human mind, and yet are nowhere near being “intelligent,” should make it pretty clear that the problem is not computing power or speed."

    Quite right. And even if we grant, arguendo, that a machine could somehow achieve consciousness, we'd still be left with the question, "What does it mean to say of an entity that it is conscious?" I've never understood how some could argue that endowing a machine with consciousness at all answers that question, or even gets us further along to answering that question. HAL is conscious in 2001...Does anyone really believe that the poignancy of his predicament--and the awful actions he feels forced to take--would be satisfactorily explained by an analysis of his circuitry?

    The fact is that living things are fundamentally different to artificial things. There is no way a man-made object can ever be conscious - a dog is clearly conscious, a mouse is conscious, a fly is conscious at its own level. It is so obvious (to me at least) that the most advanced computers will only ever achieve an imitation of awareness. Not the real thing, like what I've got. yeah

    It seems to me like both sides of this argument are actually the same side of the one coin, with perhaps a disagreement on the specific meanings of what would actually be classified as AI and what 'creating' something entails.

    I think Chalmers does an excellent job of presenting a scenario that captures the process of evolution right up to our current time and then he takes it further through into AI.

    What he is saying sounds no different, to me, than saying Human Intelligence evolved from Chimp-like intelligence. We, humans, have evolved enough intellect to realize our own intellect [1] and to attempt to speed up the process of intellectual evolution to reach the next stage. The evolutionary process need not happen in the biological substrate that we are based in, because we are now capable of attempting to emulate our own intellect on an artificial substrate. It seems logical to me, that if we are able to emulate a human[2] intelligence onto an artificial substrate then that intelligence would have the capability and all the necessary 'urges' to evolve itself further, provided there is stimulus for that evolution. That intelligence would not be under the same limitations as intelligence that exists on the biological substrate - we have no idea how it would evolve, but it would either evolve or it would die[3] just as it happens in the real world. The stimuli for evolution in a virtual world would likely be controlled by us, unless there is two-way "leakage" as Chalmers suggests, in which case the stimuli from our own world would serve as additional stimuli for the artificial intelligence also[4].

    If the AI dies, we'd likely continue doing as we have been and keep trying. If it evolves into something else that is better than what we began with, -that- I think is the AI+ that Chalmers is referring to and the whole concept of the singularity.

    The AI+ is simply a marker. To put it another way, I think it would be just as accurate to say that each and every human goes through a level of HI+ every day of their lives, simply because they're a little smarter, a little more learned, than they were the day before. The place where we choose to label the intellect increase is irrelevant to the process functioning.

    So, I think you're all really saying the same thing, but just arguing about the meanings of some of the words, which of course is very important for clarity, understanding, communication and gauging who is more right, but I don't think that it will have any influence on what is and is not possible -- nor do I think that we humans will truly know, until we have exhausted trying all of the possibilities available to us, which, given an infinite time frame, we are going to do anyway. A catch-all, I know, but I guess my point is, why the argument when you're all talking about the same thing?

    [1]: Perhaps that's just another level of what it means to be self conscious.
    [2]: Actually it doesn't much matter if we start with an emulation of human intelligence or any other life form available to us: bacterial, dog, chimp, etc -- the possibilities are the same
    [3]: Death for a virtual intelligence would mean a cessation of any changes of self, thought, or further evolution. When the same thing happens in our biological world, we decompose due to the constant forces that are acting upon us -- in a virtual world, it would appear like things just stopped moving, or are stuck in an infinite loop.
    [4]: At this point we'd have effectively created new life ... or from the other perspective, it's just evolved life like everything else on this planet.

    Gerhard Adam

    That intelligence would not be under the same limitations as intelligence that exists on the biological substrate - we have no idea how it would evolve, but it would either evolve or it would die[3] just as it happens in the real world.

    What does that even mean? 

    There is currently no ability to accurately define intelligence, so there's certainly no way to emulate it.  To suggest that it can evolve is a quantum leap that needs a bit of "fleshing out" if we're to make that assumption, while the final conclusion of "death" has no context.
    If the AI dies, we'd likely continue doing as we have been and keep trying. If it evolves into something else that is better than what we began with, -that- I think is the AI+ that Chalmers is referring to and the whole concept of the singularity.

    Once again, this makes no sense.  What dies?  However, more to the point, it is simply pointless to think that any intelligence is capable of assessing what constitutes a "better" intelligence.  You cannot assess something at a higher intellectual level than you are.

    Mundus vult decipi
    Dear Pigliucci,

    I find it a most offensive notion that you are taking Chalmers seriously! You have discovered that he is a rhetorical baboon unable to state a good argument, or defend it honestly.

    However, you are making a mistake here. The argument for the singularity must not be left to the worst philosopher of our time, that is, Chalmers. Chalmers not only knows next to nothing about artificial intelligence, he does not have the faintest notion of computer theory. He is just making random, irrelevant remarks about a difficult subject, as usual. He probably wants to make a mess out of this as he did with philosophy of mind.

    If you would like to criticize the argument for singularity, you should read Ray Solomonoff's 1985 paper, the infinity point hypothesis:

    Ray Solomonoff, "The Time Scale of Artificial Intelligence: Reflections on Social Effects," Human Systems Management, Vol. 5, pp. 149-153, 1985
    http://world.std.com/~rjs/timesc.pdf

    Do not concern yourself with the infantile understanding of Chalmers. I assure you as an AI researcher that he says nothing of substance in that argument; that argument is a bad caricature of the real argument for the singularity.

    The only point you should answer is: is it really possible that AI technology will accelerate miniaturization of computer hardware (Moore's law) further? Or that it can in any way improve the cost of computing substantially? (And as you rightly point out, whether human-level AI is truly feasible in the first place, and what exactly it requires) That's the real argument....

    You can also refer to the relevant section (right before conclusion) in a short paper that I wrote:
    http://arxiv.org/abs/1107.2788

    Cheers,

    Battlestar Galactica barely dealt with AI. Mostly it was just cloned humans dry humping & spouting nonsense written by someone who failed history class, not to mention a ton of plot holes. You had a good article going until then. Lucky thing it was at the end. If you want something really about AI, watch Terminator, any of the movies & the series. Which by the way deserved way more seasons than that awful *awful* BSG.